The following data streams, if enabled by configuration, produce data on an associated Kafka topic:
| Description | Topic Name | Component | Configuration details |
|---|---|---|---|
| Call and Conference Detail Records (CDRs) output by BVR | blueworx.cdr | BVR | CDR Configuration Options |
| Application logging (high level application events and user generated logging) | blueworx.application | BVR | Log Configuration Options |
| Errors, Warnings and emergency Informational messages | blueworx.error | BVR, BRM, BAM, BSM | BRM Trace Configuration Options |
| Low level tracing | blueworx.trace | BVR, BRM, BAM, BSM | BRM Trace Configuration Options |
| Status messages | blueworx.status | BRM | Status Configuration Options |
| License messages | blueworx.license | BRM | License Configuration Options |
| Event messages | blueworx.event | BVR | Kafka Configuration Options |
Before using Apache Kafka to process the data streams that you have configured in BVR, you must create the topics in Apache Kafka that you wish to use.
Creating Apache Kafka topics
The Apache Kafka Quickstart guide details how to create a simple topic, using broker defaults. An example for each supported topic follows:
```shell
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic blueworx.cdr
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic blueworx.application
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic blueworx.error
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic blueworx.trace
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic blueworx.status
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic blueworx.license
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic blueworx.event
```
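Note that on Apache Kafka releases from 2.2 onward, `kafka-topics.sh` can connect directly to a broker via `--bootstrap-server` instead of `--zookeeper` (which was removed in Kafka 3.0). Assuming a broker listening on the default port `localhost:9092`, the equivalent command is:

```shell
# Create a topic by connecting to a broker rather than ZooKeeper (Kafka 2.2+)
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
    --replication-factor 1 --partitions 1 --topic blueworx.cdr
```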
Additional topic parameters can be specified to override the server defaults. These parameters are detailed in the Apache Kafka documentation - Topic-level configs.
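As a sketch of overriding the server defaults at creation time, a topic can be given explicit topic-level configs with repeated `--config` options (the retention and segment-size values below are illustrative only):

```shell
# Create the CDR topic with a 7-day retention period and 100 MiB log segments
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 1 --partitions 1 --topic blueworx.cdr \
    --config retention.ms=604800000 \
    --config segment.bytes=104857600
```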
Retention policies
When creating a topic, consider its retention policy. Topic parameters determine how long messages are retained (retention.ms), whether old log segments are deleted or compacted (cleanup.policy), how frequently data is forced to disk (flush.ms), and so on.
When deciding on a retention policy, consider whether the data written to the topic is critical to your organization. If it is, you must ensure that the required Apache Kafka consumers can read the data from the topic before it is deleted; otherwise the data is lost. You may want to lengthen the retention period to allow for this, and implement mechanisms to manage the consumers used to read the data.
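As a sketch, assuming the blueworx.cdr topic already exists, its retention period can be changed after creation with `kafka-configs.sh` (the 14-day value shown is illustrative only):

```shell
# Extend retention on an existing topic to 14 days (1209600000 ms)
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
    --entity-type topics --entity-name blueworx.cdr \
    --add-config retention.ms=1209600000
```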
Multi-broker cluster configuration
To ensure high availability in your Apache Kafka environment it is recommended that you set up a multi-broker Kafka cluster, as detailed in Installing Apache Kafka (optional). When operating in a multi-broker cluster you will need to change the replication-factor parameter. The Apache Kafka documentation states:
"The replication factor controls how many servers will replicate each message that is written. If you have a replication factor of 3 then up to 2 servers can fail before you will lose access to your data. We [Apache Kafka project] recommend you use a replication factor of 2 or 3 so that you can transparently bounce machines without interrupting data consumption."
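Following that recommendation, in a cluster of three or more brokers each topic might be created with a replication factor of 3, for example:

```shell
# Each message is replicated to 3 brokers; up to 2 can fail without data loss
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 3 --partitions 1 --topic blueworx.cdr
```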
See Apache Kafka :: Adding and removing topics for more details.
Modifying topic configuration
If you need to modify the topic configuration, see Apache Kafka :: Modifying topics in the Apache Kafka documentation.
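As one common modification, the number of partitions on an existing topic can be increased (it can never be decreased) with the `--alter` option; the partition count below is illustrative only:

```shell
# Increase the partition count of an existing topic to 2
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
    --topic blueworx.cdr --partitions 2
```

Note that increasing partitions changes how keyed messages are distributed, so consumers that rely on key-based partitioning should be considered before altering a topic.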