Another pretty buggy bug

We spent a morning trying to figure out why containers in a new installation of Swarm just couldn’t talk to each other. Overlay network looked fine. Firewall looked fine. You could get from the host to the container, just not from the container to a container on the other server. So … here’s a bug where your swarm (i.e. the thing you do when you want docker stuff to run across more than one server) cannot actually, ya know, talk to the other servers. Sigh!

https://github.com/moby/moby/issues/41775

Communicating With Kafka Server Using SSL

Create a Trust Store

Use the keytool command to create a trust store with the CA chain used in your certificates. I am using Venafi, so I need to import two CA public keys:

keytool -keystore kafka.truststore.jks -alias SectigoRoot -import -file "Sectigo RSA Organization Validation Secure Server CA.crt"
keytool -keystore kafka.truststore.jks -alias UserTrustRoot -import -file "USERTrust RSA Certification Authority.crt"
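If you want to double-check that both CA certificates made it into the trust store, keytool can list its contents (it will prompt for the trust store password you just set):

keytool -list -keystore kafka.truststore.jks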

Update the Client Configuration

Create a producer-ssl.properties or consumer-ssl.properties based on your current producer/consumer properties file. Update the port – 9095 is used for SSL – and append the following lines:

security.protocol=SSL
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=<WhateverYouSetInThePreviousStep>
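For reference, a minimal consumer-ssl.properties might end up looking something like this (the broker host and group name are just the examples used elsewhere in this post; adjust for your environment):

# Example consumer-ssl.properties; values are illustrative
bootstrap.servers=kafka1586.example.net:9095
group.id=LJR5
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=<WhateverYouSetInThePreviousStep>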

Using the CLI Client Tools

Once you have a properly configured properties file, you can invoke either the kafka-console-consumer.sh or kafka-console-producer.sh script, indicating your new properties file:

/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka1586.example.net:9095 --topic LJRTest --consumer.config /kafka/config/consumer-ssl.properties --group LJR5

/kafka/bin/kafka-console-producer.sh --bootstrap-server kafka1586.example.net:9095 --topic LJRTest --producer.config /kafka/config/producer-ssl.properties

To debug SSL communication, set the following KAFKA_OPTS prior to invoking the command line producer/consumer utilities:

export KAFKA_OPTS="-Djavax.net.debug=ssl,handshake"
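If the Java debug output is too chatty, you can also sanity-check the SSL listener from the client host with openssl (the hostname and port here are just the examples from above):

openssl s_client -connect kafka1586.example.net:9095 -servername kafka1586.example.net </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates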

Adding SSL To Kafka Server

Obtain SSL Certificates for Each Server

The following process was used to enable SSL communication with the Kafka servers. First, generate certificates for each server in the environment. I am using a third-party certificate provider, Venafi. When you download the certificates, make sure to select the "PEM (OpenSSL)" format and check the box to "Extract PEM content into separate files (.crt, .key)".

Upload each zip file to the appropriate server under /tmp/, named $(hostname).zip. The following series of commands creates the files needed in the Kafka server configuration. You will be asked to set passwords for the keystore and truststore JKS files; don't forget what you use, as we'll need them later.

# Assumes Venafi certificates downloaded as OpenSSL zip files with separate public/private keys are present in /tmp/$(hostname).zip
mkdir /kafka/config/ssl/$(date +%Y)
cd /kafka/config/ssl/$(date +%Y)
mv /tmp/$(hostname).zip ./
unzip $(hostname).zip

# Create keystore for Kafka
openssl pkcs12 -export -in $(hostname).crt -inkey $(hostname).key -out $(hostname).p12 -name $(hostname) -CAfile ./ca.crt -caname root
keytool -importkeystore -destkeystore $(hostname).keystore.jks -srckeystore $(hostname).p12 -srcstoretype pkcs12 -alias $(hostname)
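To confirm the keystore came out the way you expect, keytool can show the imported entry (it will prompt for the keystore password you just set):

keytool -list -v -keystore $(hostname).keystore.jks -alias $(hostname)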

# Create truststore from CA certs
keytool -keystore kafka.server.truststore.jks -alias SectigoRoot -import -file "Sectigo RSA Organization Validation Secure Server CA.crt"
keytool -keystore kafka.server.truststore.jks -alias UserTrustRoot -import -file "USERTrust RSA Certification Authority.crt"

# Fix permissions
chown -R kafkauser:kafkagroup /kafka/config/ssl

# Create symlinks for current-year certs
cd ..
ln -s /kafka/config/ssl/$(date +%Y)/$(hostname).keystore.jks /kafka/config/ssl/kafka.keystore.jks
ln -s /kafka/config/ssl/$(date +%Y)/kafka.server.truststore.jks /kafka/config/ssl/kafka.truststore.jks

By creating symlinks to the active certs, you can renew the certificates by creating a new /kafka/config/ssl/$(date +%Y) folder and updating the symlink. No change to the configuration files is needed.
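As a sketch of what a future renewal might look like (the year is hypothetical, and this assumes you have already repeated the cert staging steps above in the new folder):

# Hypothetical renewal: stage next year's certs, then repoint the symlinks
cd /kafka/config/ssl
ln -sfn /kafka/config/ssl/2025/$(hostname).keystore.jks /kafka/config/ssl/kafka.keystore.jks
ln -sfn /kafka/config/ssl/2025/kafka.server.truststore.jks /kafka/config/ssl/kafka.truststore.jks
systemctl restart kafka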

Update Kafka server.properties to Use SSL

Append a listener prefixed with SSL:// to the existing listeners – as an example:

#2024-03-27 LJR Adding SSL port on 9095
#listeners=PLAINTEXT://kafka1587.example.net:9092
#advertised.listeners=PLAINTEXT://kafka1587.example.net:9092
listeners=PLAINTEXT://kafka1587.example.net:9092,SSL://kafka1587.example.net:9095
advertised.listeners=PLAINTEXT://kafka1587.example.net:9092,SSL://kafka1587.example.net:9095

Then add configuration values to use the keystore and truststore, specify which SSL protocols will be permitted, and set whatever client auth requirements you want:

ssl.keystore.location=/kafka/config/ssl/kafka.keystore.jks
ssl.keystore.password=<WhateverYouSetEarlier>
ssl.truststore.location=/kafka/config/ssl/kafka.truststore.jks
ssl.truststore.password=<WhateverYouSetForThisOne>
ssl.enabled.protocols=TLSv1.2,TLSv1.3
# Or whatever client auth setting you require
ssl.client.auth=none

Save the server.properties file and use “systemctl restart kafka” to restart the Kafka service.
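A quick way to confirm the broker actually came back up listening on the new port (assuming ss is available on the host):

ss -ntlp | grep 9095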

Update Firewall Rules to Permit Traffic on New Port

firewall-cmd --add-port=9095/tcp
firewall-cmd --add-port=9095/tcp --permanent
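You can verify the rule took effect with:

firewall-cmd --list-ports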

Bacon

I read this crazy way of cooking bacon — so much better than carefully spreading three slices out across the pan and cooking in batches. You take the whole package (or whatever portion thereof you wish to cook). Separate the slices, but pile them up loosely in the pan over medium heat.

This takes 15-25 minutes — just let it cook, stirring occasionally.

Then remove your beautiful, crispy, curly bacon slices to a paper towel to drain.

OpenSearch Proof of Concept In-Place Upgrade from ElasticSearch 7.7.0 to OpenSearch 2.12.0

I need to migrate my ElasticSearch installation over to OpenSearch. From reading the documentation, it isn’t really clear if that is even possible as an in-place upgrade or if I’d need to use a remote reindex or snapshot backup/restore. So I tested the process with a minimal data set. TL;DR: Yes, it works.

Create a docker instance of ElasticSearch 7.7.0

mkdir /docker/es/esdata
chmod -R g+rwx /docker/es/esdata
chgrp -R 0 /docker/es/esdata

mkdir /docker/es/esconfig

Populate the configuration info into ./esconfig; ./esdata remains an empty directory that ElasticSearch will use for data.

docker run --name es770 -dit -v /docker/es/esdata:/usr/share/elasticsearch/data -v /docker/es/esconfig:/usr/share/elasticsearch/config -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.7.0
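Before loading any data, it doesn't hurt to confirm the sandbox came up healthy. This assumes security is not enabled in the sandbox configuration, so plain HTTP works:

curl "http://localhost:9200/_cluster/health?pretty"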

Populate Data into ElasticSearch Sandbox

Use curl to populate an index with some records – you can create lifecycle policies, customize the fields, etc … this is the bare minimum to validate that data in ES7.7 can be ingested by OS2.12:

curl -X POST "localhost:9200/ljrtest/_bulk" -H "Content-Type: application/x-ndjson" -d'
{"index": {"_id": "1"}}
{"id": "1", "message": "Record one"}
{"index": {"_id": "2"}}
{"id": "2", "message": "Record two"}
{"index": {"_id": "3"}}
{"id": "3", "message": "Record three"}
{"index": {"_id": "4"}}
{"id": "4", "message": "Record four"}
{"index": {"_id": "5"}}
{"id": "5", "message": "Record five"}
{"index": {"_id": "6"}}
{"id": "6", "message": "Record six"}
{"index": {"_id": "7"}}
{"id": "7", "message": "Record seven"}
{"index": {"_id": "8"}}
{"id": "8", "message": "Record eight"}
{"index": {"_id": "9"}}
{"id": "9", "message": "Record nine"}
{"index": {"_id": "10"}}
{"id": "10", "message": "Record ten"}
'
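A quick sanity check that all ten documents landed (the count should come back as 10 once the index refreshes):

curl "http://localhost:9200/ljrtest/_count?pretty"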

Shut Down ElasticSearch

docker stop es770

Bring Up an OpenSearch 2.12 Host

mkdir /docker/es/osconfig

Populate the configuration data for OpenSearch in ./osconfig

docker run --name os212 -dit -v /docker/es/esdata:/usr/share/opensearch/data -v /docker/es/osconfig:/usr/share/opensearch/config -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" -e "OPENSEARCH_INITIAL_ADMIN_PASSWORD=P@s5w0rd-123" opensearchproject/opensearch:2.12.0
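Give the container a minute to initialize, then confirm the cluster is healthy before poking at the data. OpenSearch 2.12 enables the security plugin by default, hence the -k and the admin credentials:

curl -k -u "admin:P@s5w0rd-123" "https://localhost:9200/_cluster/health?pretty"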

Verify Data is Still Available in OpenSearch

[root@docker es]# curl -k -u "admin:P@s5w0rd-123" https://localhost:9200/ljrtest
{"ljrtest":{"aliases":{},"mappings":{"properties":{"id":{"type":"text","fields":{"keyword":{"type":"keyword","ignore_above":256}}},"message":{"type":"text","fields":{"keyword":{"type":"keyword","ignore_above":256}}}}},"settings":{"index":{"creation_date":"1710969477402","number_of_shards":"1","number_of_replicas":"1","uuid":"AO5JBoyzSJiKZA9xeA2imQ","version":{"created":"7070099","upgraded":"136337827"},"provided_name":"ljrtest"}}}}

Conclusion

Yes, a very basic data set in ElasticSearch 7.7.0 can be upgraded in-place to OpenSearch 2.12.0 — in the “real world” compatibility issues will crop up (flatten!!), but the idea is fundamentally sound.

The problem, though, is compatibility. We don't have exotic data types in our instance, but Kibana uses "flattened" fields … so the rare people who use Kibana to access and visualize their data really cannot just move to OpenSearch. That's a huge caveat. I can recreate everything manually after deleting all of the Kibana indices (and possibly more; I haven't gone down that route to see). But if I'm going to recreate everything, why wouldn't I recreate everything and use a remote reindex to move the data? I can do that incrementally: take a week to move all the data slowly, do a catch-up reindex at t-2 days, another at t-1 day, another the day of the change, heck even one a few hours before the change. Then the change itself is a quick delta reindex, stop ElasticSearch, and swap over to OpenSearch. The backout is to just swing back to the fully functional, unchanged ElasticSearch instance.
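For illustration, one of those catch-up passes with the remote reindex API might look something like this. The hostnames and the timestamp field are hypothetical, and the ElasticSearch host would also need to be added to the remote reindex allowlist (reindex.remote.allowlist in opensearch.yml) before OpenSearch will pull from it:

# Hypothetical delta reindex pulling the last two days of data from the old cluster
curl -k -u "admin:P@s5w0rd-123" -X POST "https://opensearch.example.net:9200/_reindex" -H "Content-Type: application/json" -d'
{
  "source": {
    "remote": { "host": "http://elasticsearch.example.net:9200" },
    "index": "ljrtest",
    "query": { "range": { "@timestamp": { "gte": "now-2d" } } }
  },
  "dest": { "index": "ljrtest" }
}'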