Discussions

TTL at bucket level is not working

We have tried to set the TTL value with the command "/usr/lib/ddb/bin/ddb-admin buckets ttl &lt;bucketname&gt; 1h", but the old data is still present. Can someone help with how to purge the old data from DalmatinerDB?

read data from binary API

Hello, as explained in a previous ticket, I am writing a Java client optimized for an embedded system. The client uses the TCP protocol and is currently able to write single points, get the bucket list, get bucket information, and delete a bucket. I am now trying to read points. From the documentation I understand that the binary packet sent over TCP needs to have the following shape:

    <<?GET,
      %% The size of the bucket binary and the bucket itself
      BucketSize:?BUCKET_SS/?SIZE_TYPE, Bucket:BucketSize/binary,
      %% The size of the metric binary and the metric itself
      MetricSize:?METRIC_SS/?SIZE_TYPE, Metric:MetricSize/binary,
      %% The start time to read from (given in bucket resolution)
      Time:?TIME_SIZE/?SIZE_TYPE,
      %% The number of points to read
      Count:?COUNT_SIZE/?SIZE_TYPE >>

By doing so I am able to receive an answer from the server, but the received packet contains only zero values. Below is an example that queries 6 points:

    Sent packet:     0000001c 0206056a61766137 000605736f6d6538 000000005b617f1a 00000006
    Received packet: 00000038 00000000000003e8 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000

I understand that the received values have to be decoded first, but at this moment they are all equal to zero, even though I have written non-zero values (which I can see with dfe).

Moreover, when I look at the Scala client, I see that its packet format does not match the one described in the official documentation. The Scala codec for reading data (lines 75-83 of ddb_client_scala/src/main/scala/dalmatinerdb/client/Protocol.scala):

    val query: Codec[Query] = {
      ("sentry" | sentry(MessageTypes.Query)) ::
      ("bucket" | bucket    ) ::
      ("metric" | metric    ) ::
      ("time"   | timestamp ) ::
      ("count"  | uint32    ) ::
      ("rr"     | uint8     ) ::
      ("r"      | uint8     )
    }.dropUnits.as[Query]

In the Scala query there are two additional fields, "rr" and "r", which are not described in the documentation. Is there something wrong with how my packet is formatted? If I could have any hint that would be great. Regards
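
For what it's worth, here is one way the request layout described above can be assembled in Java. The field widths (1-byte bucket size, 2-byte metric size, 64-bit time, 32-bit count, 4-byte outer length frame, all big-endian) are inferred from the hex dump in the question, not from the protocol macros, so treat them as assumptions; the metric is written as a list of length-prefixed parts, which matches the `05 73 6f 6d 65 38` ("some8") encoding visible in the sent packet:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public final class DdbGetPacket {
    private static final byte OP_GET = 0x02; // opcode seen in the sent packet

    /** Builds a length-framed GET request: bucket, metric parts, start time, count. */
    public static byte[] encode(String bucket, String[] metricParts, long time, int count) {
        byte[] b = bucket.getBytes(StandardCharsets.UTF_8);

        // Metric: each part is prefixed with its own 1-byte length.
        ByteArrayOutputStream metric = new ByteArrayOutputStream();
        for (String part : metricParts) {
            byte[] p = part.getBytes(StandardCharsets.UTF_8);
            metric.write(p.length);
            metric.write(p, 0, p.length);
        }
        byte[] m = metric.toByteArray();

        // Payload: opcode, bucket size (1 byte), bucket, metric size (2 bytes),
        // metric, time (8 bytes), count (4 bytes) -- ByteBuffer is big-endian by default.
        ByteBuffer payload = ByteBuffer.allocate(1 + 1 + b.length + 2 + m.length + 8 + 4);
        payload.put(OP_GET)
               .put((byte) b.length).put(b)
               .putShort((short) m.length).put(m)
               .putLong(time)
               .putInt(count);

        // Outer frame: 4-byte big-endian length of the payload, then the payload.
        ByteBuffer frame = ByteBuffer.allocate(4 + payload.capacity());
        frame.putInt(payload.capacity()).put(payload.array());
        return frame.array();
    }
}
```

With bucket "java7", one metric part "some8", and the same time and count as in the question, this produces a 31-byte frame, one byte shorter than the 32-byte capture above: the capture appears to carry an extra length byte inside the bucket field as well (06 05 6a 61 76 61 37), which might be worth double-checking against the server's expectations.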

set bucket resolution (new open source java client for embedded system)

Dear Heinz, I am writing an open source Java client in order to use DDB in one of our embedded systems. I know that there is already a Scala client, but since my goal is to run in a very limited environment (hardware-wise), it would be good to develop from scratch in order to have full control. The client runs under Ubuntu 18.04.1 LTS and is being tested against the released deb packages of ddb and dfe. The client sends queries over a TCP socket and has the following methods at this moment:

- getHost(): String
- getPort(): int
- setHost(String)
- setPort(int)
- write(String, String, long, int): void (if the mentioned bucket/metric doesn't exist, it is created)
- write(String, String, long, int[]): void
- listBuckets(): String[]
- listMetrics(String): String[]
- getBucketInfo(String): Long[]
- deleteBucket(String): void

I am now wondering how to set the bucket resolution. With my current configuration, each time I create a new bucket it gets a default ppf of 604800 and a default resolution of 1000 ms. From the documentation I understand that the ppf can be modified through ddb.conf, which I have succeeded in doing, but I don't yet see how to modify the resolution. It would be great if I could have a hint. It would allow me to develop the client further and also deploy it on my embedded system (which I think could be a good demonstrator of ddb performance). Regards
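
For readers following along, the method list above corresponds to an interface roughly like the following. This is just a restatement of the post's own list; the interface name and parameter names are invented for the sketch:

```java
/** Sketch of the client surface described in the post; names beyond its list are placeholders. */
public interface DdbClient {
    String getHost();
    int getPort();
    void setHost(String host);
    void setPort(int port);

    /** Writes a single point; the bucket/metric is created if it does not exist. */
    void write(String bucket, String metric, long time, int value);

    /** Writes a batch of consecutive points starting at the given time. */
    void write(String bucket, String metric, long time, int[] values);

    String[] listBuckets();
    String[] listMetrics(String bucket);
    Long[] getBucketInfo(String bucket);
    void deleteBucket(String bucket);
}
```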

Problem building on Linux Mint

I just want to give this a try. It sounds kind of cool, leveraging ZFS and all, but I ran into a couple of compilation problems. I am stuck on this one now:

    [email protected] ~/dalmatinerdb $ make
    [ -f .git/hooks ] && cp hooks/pre-commit .git/hooks || true
    /home/bob/dalmatinerdb/rebar3 compile
    ===> Verifying dependencies...
    ===> Compiling otters
    ===> Compiling _build/default/lib/otters/src/of_parser.erl failed
    ../../Users/heinz/Projects/fifo/forks/otters/_build/default/lib/otters/src/of_parser.erl:185: attribute 'dialyzer' after function definitions
    ../../Users/heinz/Projects/fifo/forks/otters/_build/default/lib/otters/src/of_parser.erl:267: attribute 'dialyzer' after function definitions

It seems that there are references to files that I don't have. Any help would be appreciated.

dalmatinerfe

Hi, I'm just trying Dalmatiner compiled from source on a fresh Ubuntu 16.04 installation. The compile went fine and dalmatinerdb seems to be running, but when I try to run ./bin/dalmatinerfe start, as you noted in the install howto, this error message appears:

    vm.args needs to have a -name parameter. -sname is not supported.

If I look at the dalmatinerfe shell script, it is obvious that it needs /data/dalmatinerfe/etc/vm.args, but there is no clue in your howto about this file. Regards, Jan
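
For reference, a minimal vm.args sketch that satisfies the -name requirement reported by the error. The node name and cookie below are placeholder values, not taken from the official docs:

```text
## /data/dalmatinerfe/etc/vm.args (path inferred from the dalmatinerfe shell script)
## The release requires a fully qualified -name; -sname is rejected.
-name dalmatinerfe@127.0.0.1

## Erlang distribution cookie (placeholder value)
-setcookie dalmatinerfe
```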

Using docker images with docker-compose

Hi, At Luminis, we are evaluating time series databases with the intent of integrating one into one of our products. I am currently trying to get Dalmatiner to work using your Docker images, along with a Postgres image, without much success. In another post you mentioned that you no longer have an all-in-one image, but are using docker-compose. Do you have a sample docker compose configuration file available? Regards, Stuart
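
In case it helps others evaluating the same setup, here is a rough docker-compose sketch of the kind of file being asked about. The image names, ports, volumes, and environment variables are all placeholders/assumptions, not taken from an official compose file:

```yaml
version: "3"
services:
  ddb:
    image: dalmatinerdb/ddb          # placeholder image name
    volumes:
      - ddb-data:/data               # placeholder: persist metric data
  dfe:
    image: dalmatinerdb/dfe          # placeholder: the frontend
    ports:
      - "8080:8080"                  # placeholder HTTP port
    depends_on:
      - ddb
      - postgres
  postgres:
    image: postgres:9.6
    environment:
      POSTGRES_DB: metric_metadata   # placeholder database name
volumes:
  ddb-data:
```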

building the binary

Hi, when building the dalmatinerdb binary from source, I had the following error:

    Compiling _build/default/lib/riak_ensemble/src/riak_ensemble_test.erl failed

Do you know what I need to do to fix it? Thanks very much!
ANSWERED

Docker "all-in-one" container

I see several containers, but no "all-in-one". Am I missing it?

How can I install DalmatinerDB in Ubuntu 16 lts?

Hello Heinz! I'm trying to install DalmatinerDB in an Ubuntu 16 linux box. The steps on https://dalmatiner.readme.io/docs/ddb-installation don't seem to work. Do you have any up to date shell installation instructions? Thank you for your help! Cheers, Felipe

Exploring

I'm looking for a time series database that supports large numbers of metrics and tags for dimensional queries, and it looks like DalmatinerDB can provide this. I see that the documentation is a little sparse and that there might be some issues with ZFS and project continuity. I have installed DalmatinerDB in a Docker environment just to get started and discover its capabilities. I wonder how I can drop data after a period of time; let's say I only want to keep data for a period of 3 months. How do I remove it, and can this be automated? I also need to find out whether I can divide one time series by another, to calculate the failure ratio of errors over the total number of events. Finally, how can I specify tags on metrics when generating data with Logstash?
ANSWERED
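
On the ratio question: if the query language cannot express the division directly, one fallback is to fetch both series and divide client-side. A minimal Java sketch of that post-processing step (the series here are plain arrays standing in for query results; nothing DDB-specific is assumed):

```java
public final class SeriesMath {
    /**
     * Divides two equally sampled series point by point.
     * Returns NaN where the denominator is zero (no events in that slot).
     */
    public static double[] ratio(double[] errors, double[] total) {
        if (errors.length != total.length) {
            throw new IllegalArgumentException("series must be equally sampled");
        }
        double[] out = new double[errors.length];
        for (int i = 0; i < errors.length; i++) {
            out[i] = total[i] == 0 ? Double.NaN : errors[i] / total[i];
        }
        return out;
    }
}
```

Each slot divides the error count by the total count at the same timestamp, which assumes both series were fetched with the same start time, resolution, and count.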

How to delete metrics

Is there a command to delete metrics? I did not find one in the documentation.