Tag: docker

NEO4J: APOC Plugin in Docker Container

I wanted to test some of the Awesome Procedures on Cypher (APOC) functions, but first I needed to install the plugins. I needed to modify my docker run statement slightly to persist the /plugins directory and install APOC core:


docker run -dit -p 7474:7474 -p 7687:7687 --volume=/docker/neo4j/data:/data --volume=/docker/neo4j/conf:/var/lib/neo4j/conf --volume=/docker/neo4j/plugins:/plugins -e NEO4J_AUTH=none -e NEO4J_apoc_export_file_enabled=true -e NEO4J_apoc_import_file_enabled=true -e NEO4J_apoc_import_file_use__neo4j__config=true -e NEO4J_PLUGINS=\[\"apoc\"\] --name neo4j neo4j
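
Once the container is up, a quick way to confirm the plugin actually loaded is to ask for the APOC version; something like this, run through cypher-shell inside the container (no credentials needed here since NEO4J_AUTH=none):

# Ask APOC for its version -- errors out if the plugin didn't load
docker exec -it neo4j cypher-shell "RETURN apoc.version();"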

Running a Docker Container without *RUNNING* The Container

I needed to get files from a container image that I couldn’t actually start (not enough memory, and finding a box with more memory wasn’t a reasonable option). Fortunately, you can override the container entrypoint to get a shell in the container without actually running whatever the container would normally run.

docker run -ti --entrypoint=bash imageName
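
From that shell you can poke around and copy out whatever you need. Alternatively (and without starting any process at all), docker create makes the container filesystem available to docker cp; a sketch, with a placeholder path:

# Create (but never start) a container from the image, copy files out, clean up
docker create --name tmpContainer imageName
docker cp tmpContainer:/path/to/file ./
docker rm tmpContainer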

Quick Docker Test

I’m building a quick image to test permissions to a folder structure for a friend … so I’m making a quick note about how I’m building this test image so I don’t have to keep re-typing it all.

FROM python:3.7-slim

RUN apt-get update && apt-get install -y curl --no-install-recommends

RUN mkdir -p /srv/ljr/test

# Create service account -- the group has to exist before useradd can reference it by GID
RUN groupadd -g 30001 webuser
RUN useradd --comment "my service account" -u 30001 -g 30001 --shell /sbin/nologin --no-create-home webuser
# Copy a bunch of stuff various places under /srv
# Set ownership on /srv folder tree
RUN chown -R 30001:30001 /srv

USER 30001:30001
ENTRYPOINT ["/bin/bash", "/entrypoint.sh"]
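
The ENTRYPOINT references an /entrypoint.sh that would be copied in with the rest of the elided COPY steps. A hypothetical minimal version for this permissions test:

#!/bin/bash
# Hypothetical /entrypoint.sh -- report identity, then verify the service
#   account can write within the /srv tree
id
touch /srv/ljr/test/permcheck && echo "write to /srv/ljr/test OK" || echo "write to /srv/ljr/test FAILED"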

Build the image

docker build -t "ljr:latest" .

Run a transient container that is deleted on exit

docker run --rm -it --entrypoint "/bin/bash" ljr

ElasticSearch Java Heap Size — Order of Precedence

I was asked to look at a malfunctioning ElasticSearch server this week. It’s a lab sandbox, so not a huge deal … but still something they wanted online and functioning for some proof-of-concept testing. And, as a bonus, they’re willing to let us use their lab servers for our own sandboxing (IPv6 implementation, ELK upgrades).

There were a handful of problems: it looks like the whole thing used to be a multi-node cluster, but the second cluster node has vanished without a trace; the entire platform was re-IP’d; vm.max_map_count wasn’t set on the Docker server; and the logstash folder had a backup of a pipeline config in /path/to/logstash/pipeline/ … which still loads, so it causes a continual stream of port-in-use exceptions. But the biggest problem was that any attempt at actually using the ElasticSearch server resulted in it falling over: Java crashed with an out of memory error because the heap space was exhausted. Now I’ve had really small sandboxes before — my first ES sandbox only had a gig of memory and was quite prone to this crash. But the lab server they’ve set up has 64GB of memory, so allocating a few gigs seemed like a quick solution.

The jvm.options file and jvm.options.d folder weren’t mounted into the container — they were the default files held within the container, which seemed odd. I made a mental note that we’d need to either mount them in or re-apply any changes when the container gets updated. But no matter how much heap space I allocated, ES crashed.

I discovered that the Docker deployment set an ENV variable for ES_JAVA_OPTS — something which, per the ElasticSearch documentation, overrides all other JVM options. So no matter what I was putting into the jvm options file, the 256 meg set in the ENV was actually being used.

Luckily it’s not terribly difficult to modify the ENVs within an existing container. You could, of course, redeploy the container with the new settings (and I’ll do that next time, since I’ve also got to get IPv6 enabled). But I wasn’t planning on making any other changes.
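
If you do redeploy, the fix is just setting a sane heap in that variable; a sketch (container name, image tag, and heap sizes are placeholders, and single-node discovery is assumed):

# Hypothetical redeploy -- the important bit is ES_JAVA_OPTS, which
#   overrides the jvm.options heap settings
docker run -d --name elasticsearch \
    -e discovery.type=single-node \
    -e ES_JAVA_OPTS="-Xms4g -Xmx4g" \
    -p 9200:9200 \
    docker.elastic.co/elasticsearch/elasticsearch:7.16.3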

Docker — Changing an Existing Container

I’ll start by acknowledging that, of course, you could just redeploy the container with the settings you want now. The whole point of containerized development is that anything “good” should either be part of the deployment settings or data persisted outside of the container. So, in theory, redeploying the container every day shouldn’t really be detectable. Even when you didn’t deploy the original container (i.e. you don’t have the Dockerfile or docker run command handy to tweak as needed), you can reverse engineer what you need from docker inspect. But sometimes? It’s quicker/easier/more convenient to just fix what you need to within the existing container. And it is possible to do so.

The trickiest part is finding the right file to edit.

# cd into docker container definition folder
cd /var/lib/docker/containers/
# find the guid for the container you want to edit
docker ps
# Find the corresponding folder name
ls -al | grep bc9dc66882af
# cd into that folder
cd bc9dc66882af18f59c209faf10031fe21765571d0a2fe4a32a768a1d52cc1e53
# Stop docker first -- the daemon rewrites config.v2.json on shutdown,
#   so edits made while it is running get clobbered
systemctl stop docker
# Edit the config.v2.json file for the container
vi config.v2.json
# And, finally, start docker back up
systemctl start docker
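
In the ElasticSearch case above, the value I needed to change was in the container’s environment. Within config.v2.json, the environment lives in the Env array under Config; a trimmed, illustrative fragment (all of the surrounding keys and the other Env entries are elided):

"Config": {
    "Env": [
        "ES_JAVA_OPTS=-Xms4g -Xmx4g"
    ]
}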

Building a Docker Image Without Internet Access (kinda)

We’ve been moving a bunch of servers into magic cloudy land. A move which comes with a whole lot of additional security restrictions (and accompanying marketing that the cloud is so much more secure … uhh, no … any host that you could only reach through a keylogged jump server, and that had absolutely no access to the internal or external network without a specific request and firewall rule, would be equally secure. You just hadn’t bothered with all of those controls before!). As such, we cannot just pull an image from Docker Hub. We also cannot just use apt-get to install/update packages.

So … how can you build a Docker image with updated software for use on these locked down boxes? Bit of a trick question — you cannot. You can, however, build such an image elsewhere and then export/import the image.

Build the image using your Dockerfile

docker build -t my_app_base .

Then export the image you created

docker save my_app_base | gzip > myapp_image.tar.gz

Download the TGZ file and transfer it to your restricted-access host. Then import the image

docker load < myapp_image.tar.gz

Verify the image loaded successfully

[user@server ~]$ docker images
REPOSITORY              TAG      IMAGE ID      CREATED            SIZE
localhost/my_app_base   latest   5z8e35z99d5z  19 minutes ago     995 MB
<none>                  <none>   5zdbcfz6393z  About an hour ago  1.01 GB

Now use docker run with your my_app_base:latest image to create a running container based on the image.
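
For example, a transient container from the imported image (a sketch; flags depend on what the image actually runs):

docker run --rm -it localhost/my_app_base:latest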

Setting up redis sandbox

To set up my redis sandbox in Docker, I created two folders — conf and data. The conf folder houses the SSL files and the configuration file; the data directory stores the redis data.

I first needed to generate an SSL certificate. The certificate and its private key are stored in a pem file and a key file, and the public cert of the CA that signed the cert is stored in a “ca” folder.
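
For a sandbox, a self-signed certificate is fine; a minimal sketch using openssl (the CN matches the hostname clients will connect to, and since the cert is self-signed it doubles as its own CA cert):

# Generate a self-signed cert and key, then stage the cert as its own CA
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -keyout memcache.key -out memcache.pem \
    -subj "/CN=memcached.example.com"
mkdir -p ca && cp memcache.pem ca/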

Then I created a redis configuration file — note that the paths are relative to the Docker container:

################################## MODULES #####################################

################################## NETWORK #####################################
# My web server is on a different host, so I needed to bind to the public 
#   network interface. I think we'd *want* to bind to localhost in our
#   use case. 
# bind 127.0.0.1
# Similarly, I think we'd want 'yes' here
protected-mode no

# Might want to use 0 to disable listening on the insecure port
port 6379


tcp-backlog 511
timeout 10
tcp-keepalive 300
################################# TLS/SSL #####################################
tls-port 6380

tls-cert-file /opt/redis/ssl/memcache.pem
tls-key-file /opt/redis/ssl/memcache.key
tls-ca-cert-dir /opt/redis/ssl/ca

# I am not auth'ing clients for simplicity
tls-auth-clients no

tls-protocols "TLSv1.2 TLSv1.3"
tls-prefer-server-ciphers yes
tls-session-caching no

# These would only be set if we were setting up replication / clustering
# tls-replication yes
# tls-cluster yes
################################# GENERAL #####################################
# This is for docker, we may want to use something like systemd here. 
daemonize no
supervised no

#loglevel debug
loglevel notice

logfile "/var/log/redis.log"
syslog-enabled yes
syslog-ident redis
syslog-facility local0

# 1 might be sufficient -- we *could* partition different apps into different databases
# But I'm thinking, if our keys are basically "user:target:service" ... then report_user:RADD:Oracle
# from any web tool would be the same cred. In which case, one database suffices. 
databases 3
################################ SNAPSHOTTING  ################################
save 900 1
save 300 10
save 60 10000

stop-writes-on-bgsave-error yes

rdbcompression yes
rdbchecksum yes

dbfilename dump.rdb


# Working directory -- where the dump.rdb snapshot is written
dir ./

################################## SECURITY ###################################
# I wasn't setting up any sort of authentication and just using the facts that
#  (1) you are on localhost and
#  (2) you have the key to decrypt the stuff we stash
#  to mean you are authorized. 

############################## MEMORY MANAGEMENT ################################
# This is what to evict from the dataset when memory is maxed
maxmemory-policy volatile-lfu
############################# LAZY FREEING ####################################

lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no

############################ KERNEL OOM CONTROL ##############################
oom-score-adj no
############################## APPEND ONLY MODE ###############################

appendonly no
appendfsync everysec

no-appendfsync-on-rewrite no

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

aof-load-truncated yes

aof-use-rdb-preamble yes

############################### ADVANCED CONFIG ###############################
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

list-max-ziplist-size -2

list-compress-depth 0

set-max-intset-entries 512

zset-max-ziplist-entries 128
zset-max-ziplist-value 64

hll-sparse-max-bytes 3000

stream-node-max-bytes 4096
stream-node-max-entries 100

activerehashing yes

client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

dynamic-hz yes

aof-rewrite-incremental-fsync yes

rdb-save-incremental-fsync yes

########################### ACTIVE DEFRAGMENTATION #######################
# Enable active defragmentation (left disabled here)
activedefrag no

# Minimum amount of fragmentation waste to start active defrag
active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag
active-defrag-threshold-lower 10

Once I had the configuration data set up, I created the container. I’m using port 6380 for the SSL connection; for the sandbox, I also exposed the clear-text port. I mapped volumes for the redis data, the SSL files, and the redis.conf file:

docker run --name redis-srv -p 6380:6380 -p 6379:6379 -v /d/docker/redis/conf/ssl:/opt/redis/ssl -v /d/docker/redis/data:/data -v /d/docker/redis/conf/redis.conf:/usr/local/etc/redis/redis.conf -d redis redis-server /usr/local/etc/redis/redis.conf --appendonly yes
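
Before moving on to code, redis-cli inside the container can verify the TLS listener; a sketch, assuming the self-signed cert landed in the ca folder as memcache.pem:

# PONG back means the TLS port is up and the cert chain validates
docker exec -it redis-srv redis-cli --tls --cacert /opt/redis/ssl/ca/memcache.pem -p 6380 ping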

Voila, I have a redis server ready. Quick PHP code to ensure it’s functional:

<?php

$sodiumKey   = random_bytes(SODIUM_CRYPTO_SECRETBOX_KEYBYTES);   // 256 bit key
$sodiumNonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES); // 24 bytes

#print "Key:\n";
#print sodium_bin2hex($sodiumKey);
#print "\n\nNonce:\n";
#print sodium_bin2hex($sodiumNonce);
#print "\n\n";

$redis = new Redis();
$redis->connect('tls://memcached.example.com', 6380); // tls:// scheme enables TLS
// Check whether the server is running or not
echo "<PRE>Server is running: " . $redis->ping() . "\n</pre>";

$checks = array(
    "credValueGoesHere",
    "cred2",
    "cred3",
    "cred4",
    "cred5"
);

#$ciphertext = safeEncrypt($message, $key);
#$plaintext = safeDecrypt($ciphertext, $key);

// Store each value, encrypted, with a 30 minute timeout
foreach ($checks as $i => $value) {
    usleep(100);
    $key = 'credtest' . $i;
    $strCryptedValue = base64_encode(sodium_crypto_secretbox($value, $sodiumNonce, $sodiumKey));
    $redis->setEx($key, 1800, $strCryptedValue);    // 30 minute timeout
}

// Read the values back, decrypting each one
echo "<UL>\n";
for ($i = 0; $i < count($checks); $i++) {
    $key = 'credtest' . $i;
    $strValue = sodium_crypto_secretbox_open(base64_decode($redis->get($key)), $sodiumNonce, $sodiumKey);
    echo "<LI>The value on key $key is: $strValue \n";
}
echo "</UL>\n";

echo "<P>\n";
echo "<P>\n";
echo "<UL>\n";
$objAllKeys = $redis->keys('*');    // all keys will match this
foreach ($objAllKeys as $objKey) {
    print "<LI>The key $objKey has a TTL of " . $redis->ttl($objKey) . "\n";
}
echo "</UL>\n";

// Overwrite each value with a shorter, 1 minute timeout
foreach ($checks as $i => $value) {
    usleep(100);
    $value = $value . "-updated";
    $key = 'credtest' . $i;
    $strCryptedValue = base64_encode(sodium_crypto_secretbox($value, $sodiumNonce, $sodiumKey));
    $redis->setEx($key, 60, $strCryptedValue);      // 1 minute timeout
}

echo "<UL>\n";
for ($i = 0; $i < count($checks); $i++) {
    $key = 'credtest' . $i;
    $strValue = sodium_crypto_secretbox_open(base64_decode($redis->get($key)), $sodiumNonce, $sodiumKey);
    echo "<LI>The value on key $key is: $strValue \n";
}
echo "</UL>\n";

echo "<P>\n";
echo "<UL>\n";
$objAllKeys = $redis->keys('*');    // all keys will match this
foreach ($objAllKeys as $objKey) {
    print "<LI>The key $objKey has a TTL of " . $redis->ttl($objKey) . "\n";
}
echo "</UL>\n";

// Overwrite again with a 1 second timeout so the data ages out quickly
foreach ($checks as $i => $value) {
    usleep(100);
    $value = $value . "-updated";
    $key = 'credtest' . $i;
    $strCryptedValue = base64_encode(sodium_crypto_secretbox($value, $sodiumNonce, $sodiumKey));
    $redis->setEx($key, 1, $strCryptedValue);       // 1 second timeout
}

echo "<P>\n";
echo "<UL>\n";
$objAllKeys = $redis->keys('*');    // all keys will match this
foreach ($objAllKeys as $objKey) {
    print "<LI>The key $objKey has a TTL of " . $redis->ttl($objKey) . "\n";
}
echo "</UL>\n";

sleep(5); // Sleep so data ages out of redis
echo "<UL>\n";
for ($i = 0; $i < count($checks); $i++) {
    $key = 'credtest' . $i;
    $strValue = sodium_crypto_secretbox_open(base64_decode($redis->get($key)), $sodiumNonce, $sodiumKey);
    echo "<LI>The value on key $key is: $strValue \n";
}
echo "</UL>\n";

?>

Memcached with TLS in Docker

To bring up a Docker container running memcached with SSL enabled, create a local folder to hold the SSL key. Mount a volume to your local config folder, then point ssl_chain_cert and ssl_key at the in-container paths to the PEM and KEY files:


docker run -v /d/docker/memcached/config:/opt/memcached.config -p 11211:11211 --name memcached-svr -d memcached memcached -m 64 --enable-ssl -o ssl_chain_cert=/opt/memcached.config/localcerts/memcache.pem,ssl_key=/opt/memcached.config/localcerts/memcache.key -v
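
A quick way to verify the TLS listener is up is openssl s_client; something like:

# Shows the handshake and the certificate the server presents
openssl s_client -connect localhost:11211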

Docker and Windows — Unable to Allocate Port

On the most recent iteration of Windows (20H2 build 19042.1052) and Docker Desktop (20.10.7 built Wed Jun 2 11:54:58 2021), I found myself unable to launch my Oracle container. The error indicated that the binding was forbidden.


C:\WINDOWS\system32>docker start oracleDB
Error response from daemon: Ports are not available: listen tcp 0.0.0.0:1521: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
Error: failed to start containers: oracleDB

Forbidden by whom?! Windows, it seems. Checking excluded ports using netsh:

netsh int ipv4 show excludedportrange protocol=tcp

This shows all sorts of ports being forbidden — Hyper-V grabs large blocks of ports when it starts. To keep it away from the port you want to use, you’ve got to add a manual port exclusion.

To reserve the port for your own use, disable Hyper-V (reboot), add a port exclusion, and re-enable Hyper-V (reboot again):

REM Disable Hyper-V
dism.exe /Online /Disable-Feature:Microsoft-Hyper-V 
REM REBOOT ... then add an exclusion for the Oracle DB Port
netsh int ipv4 add excludedportrange protocol=tcp startport=1521 numberofports=1 
dism.exe /Online /Enable-Feature:Microsoft-Hyper-V /All
REM REBOOT again

Now 1521 is reserved for Oracle.