Category: Coding

SSL Connection Failure from Docker Image

We have a script that’s used to securely retrieve passwords … a script which failed when run from a Docker container.

* could not load PEM client certificate, OpenSSL error error:140AB18E:SSL routines:SSL_CTX_use_certificate:ca md too weak, (no key found, wrong pass phrase, or wrong file format?)

The root of the issue appears to be tied to the Debian OS used in the python:3.7-slim container. Newer releases of some Linux distributions ship an openssl config with a default SSL_CTX_set_security_level setting that precludes communication with the password server.

Remediating this at the server end is not a reasonable approach, so the client config needs to be changed to allow the connection to be established. Setting the security level to 1 allows the connection to proceed, so I proposed including an additional instruction in the Dockerfile that uses sed to update the configuration parameter:

sed -i 's/DEFAULT@SECLEVEL=2/DEFAULT@SECLEVEL=1/' /etc/ssl/openssl.cnf

Once that setting was updated, the script worked perfectly as it does on our physical and VM servers.
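
That sed edit relaxes the policy for everything in the image. If the retrieval script itself happens to be Python, a narrower alternative is to lower the security level for just that one connection by building a custom SSL context. This is a rough sketch, not the script we actually run, assuming a urllib-based client and a made-up endpoint URL:

import ssl
import urllib.request

# Hypothetical endpoint -- substitute the real password server URL
strUrl = "https://passwordserver.example.com/"

# Mirror DEFAULT@SECLEVEL=1 for this one context, leaving /etc/ssl/openssl.cnf untouched
ctxRelaxed = ssl.create_default_context()
ctxRelaxed.set_ciphers("DEFAULT@SECLEVEL=1")
# The original error references a PEM client certificate; if one is needed, load it here:
# ctxRelaxed.load_cert_chain("/path/to/client.pem")

with urllib.request.urlopen(strUrl, context=ctxRelaxed) as objResponse:
    print(objResponse.status)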

Reporting Last Patch Dates on Fedora / RedHat / CentOS Systems

I needed to verify the last time a bunch of servers were patched — basically to ensure compliance with the stated quarterly patching interval. This Python script pulls the list of installed packages and the install date for each package, sorts the info by date descending, and then reports the most recent date on which any package was updated — as well as the number of packages updated on that date. If only one package was updated on that date … the system might still bear some investigation. But if a couple of dozen packages were updated in the past quarter … we don’t need to be too worried about turning up on the out-of-compliance report.

import subprocess
import re
import datetime
from collections import OrderedDict

def getFirstElement(odictInput):
    '''
        This function returns the first element from an ordered collection (an arbitrary element if an unordered collection is passed in)
        Input -- odictInput -- ordered collection
        Output -- type varies -- first element of ordered collection, arbitrary element of unordered collection

    '''
    return next(iter(odictInput))

listHosts = ['host01.example.com', 'host02.example.com', 'host03.example.com','host04.example.com','host05.example.com']

for strHost in listHosts:
    dictPatchDates = {}

    # Pull the installed packages, most recently installed/updated first
    objResults = subprocess.Popen(['ssh', strHost, 'rpm', '-qa', '--last'], stdout=subprocess.PIPE)
    for strLine in objResults.stdout:
        strPackageInfo = strLine.decode('utf-8').rstrip()
        # Split out the "Tue 02 Feb 2021" style date that rpm --last appends to each package
        listPackageInfo = re.split(r'\s*([a-zA-Z]{3,}\s[0-9]{2,}\s[a-zA-Z]{3,}\s[0-9]{2,})', strPackageInfo)
        strUpdateDate = listPackageInfo[1]
        dateUpdateDate = datetime.datetime.strptime(strUpdateDate, "%a %d %b %Y").date()
        # Count the number of packages updated on each date
        if dictPatchDates.get(dateUpdateDate) is not None:
            dictPatchDates[dateUpdateDate] = dictPatchDates[dateUpdateDate] + 1
        else:
            dictPatchDates[dateUpdateDate] = 1

    # Sort the dates newest first; the first element is the most recent patch date
    dictOrderedPatchDates = OrderedDict(sorted(dictPatchDates.items(), key=lambda t: t[0], reverse=True))
    dateLatestPatch = getFirstElement(dictOrderedPatchDates)
    print(f"{strHost}\t{dateLatestPatch}\t{dictOrderedPatchDates[dateLatestPatch]}")
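
For reference, the regex is pulling the date portion out of rpm's "--last" output. Here is a quick standalone check against a made-up line (hypothetical package name and date) showing what the split and strptime produce:

import re
import datetime

# Hypothetical line in the format `rpm -qa --last` returns
strPackageInfo = "kernel-core-5.10.12-200.fc33.x86_64   Tue 02 Feb 2021 08:14:31 AM EST"

listPackageInfo = re.split(r'\s*([a-zA-Z]{3,}\s[0-9]{2,}\s[a-zA-Z]{3,}\s[0-9]{2,})', strPackageInfo)
print(listPackageInfo[1])   # Tue 02 Feb 2021
print(datetime.datetime.strptime(listPackageInfo[1], "%a %d %b %Y").date())   # 2021-02-02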

Building Gerbera on Fedora

There is a great deal of documentation available for building Gerbera from source on a variety of Linux flavors. Unfortunately, Fedora isn’t one of those (and the package names don’t exactly match up to let you replace “apt-get” with “yum” and be done). So I am quickly documenting the process we followed to build Gerbera from source.

The Fedora build of Gerbera has the binaries in /usr/bin and the manual build places the gerbera binary in /usr/local/bin — the build updates the unit file to reflect this change, but this means you want to back up any customizations you’ve made to the unit file before running “make install”.

You need the build system (cmake, g++, etc.) plus the devel packages from the following table, as required by your build options.

Additional packages that we needed to install: automake, autoconf, libtool

| Library | Fedora Package | Required? | Note | Compile-time option | Default |
| --- | --- | --- | --- | --- | --- |
| libpupnp | libupnp-devel | XOR libnpupnp | pupnp | | |
| libnpupnp | Build from source (if needed) | XOR libupnp | I was only able to locate this as a source, not available from Fedora repos | WITH_NPUPNP | Disabled |
| libuuid | libuuid-devel | Required | Not required on *BSD | | |
| pugixml | pugixml-devel | Required | XML file and data support | | |
| libiconv | glibc-headers | Required | Charset conversion | | |
| sqlite3 | sqlite-devel | Required | Database storage | | |
| zlib | zlib-devel | Required | Data compression | | |
| fmtlib | fmt-devel | Required | Fast string formatting | | |
| spdlog | spdlog-devel | Required | Runtime logging | | |
| duktape | duktape-devel | Optional | Scripting Support | WITH_JS | Enabled |
| mysql | mariadb-devel | Optional | Alternate database storage | WITH_MYSQL | Disabled |
| curl | libcurl-devel | Optional | Enables web services | WITH_CURL | Enabled |
| taglib | taglib-devel | Optional | Audio tag support | WITH_TAGLIB | Enabled |
| libmagic | file-devel | Optional | File type detection | WITH_MAGIC | Enabled |
| libmatroska | libmatroska-devel | Optional | MKV metadata, required for MKV | WITH_MATROSKA | Enabled |
| libebml | libebml-devel | Optional | MKV metadata, required for MKV | WITH_MATROSKA | Enabled |
| ffmpeg/libav | ffmpeg-free-devel | Optional | File metadata | WITH_AVCODEC | Disabled |
| libexif | libexif-devel | Optional | JPEG Exif metadata | WITH_EXIF | Enabled |
| libexiv2 | exiv2-devel | Optional | Exif, IPTC, XMP metadata | WITH_EXIV2 | Disabled |
| lastfmlib | liblastfm-devel | Optional | Enables scrobbling | WITH_LASTFM | Disabled |
| ffmpegthumbnailer | ffmpegthumbnailer-devel | Optional | Generate video thumbnails | WITH_FFMPEGTHUMBNAILER | Disabled |
| inotify | glibc-headers | Optional | Efficient file monitoring | WITH_INOTIFY | |
| libavformat | libavformat-free-devel | Required for 2.0 | | | |
| libavutil | libavutil-free-devel | Required for 2.0 | | | |
| libavcodec | libavcodec-free-devel | Required for 2.0 | | | |

Then follow the generalized instructions — cd into the folder where you want to run the build and run (customizing the cmake line as you wish):

git clone https://github.com/gerbera/gerbera.git
mkdir build
cd build
cmake ../gerbera -DWITH_MAGIC=1 -DWITH_MYSQL=1 -DWITH_CURL=1 -DWITH_INOTIFY=1 -DWITH_JS=1 -DWITH_TAGLIB=1 -DWITH_AVCODEC=1 -DWITH_FFMPEGTHUMBNAILER=0 -DWITH_EXIF=1 -DWITH_EXIV2=1 -DWITH_SYSTEMD=1 -DWITH_LASTFM=0 -DWITH_DEBUG=1
make -j4
sudo make install

As with the Gerbera binary, the Fedora build places the web content in /usr/share/gerbera and the manual build places the web content into /usr/local/share/gerbera — yes, you can change the paths in the build, and I’m sure you can clue Gerbera into the new web file location. I opted for the quick/easy/lazy solution of running

mv /usr/share/gerbera /usr/share/gerbera.old
ln -s /usr/local/share/gerbera /usr/share/

This symlinks the location where my config expects the web components to the newly installed files.

On the first start of Gerbera, SQL scripts may be run to update the database — don’t stop or kill the service during this process, as there’s no checkpoint restart of the upgrade. We backed up /etc/gerbera/gerbera.db prior to starting our Gerbera installation. We’ve also wiped the database files to start from scratch and test changes that impacted how items are ingested into the database.

Fin.

Example Azure DevOps File Deployment

To automatically update files from your repository on your server, use a release pipeline. For convenience, I use deployment groups to ensure all of the servers are updated.

Creating a deployment group

Under the Project, navigate to Pipelines and “deployment groups”. Click “New” and provide a name for the deployment group.

Now click into the deployment group and select “Register”

Since I have a Linux server, I changed the “Type of target to register” drop-down to “Linux”. Copy the command and run it on your server (I don’t run literally what MS provides – I break it out into individual commands so I can create a folder named what I want it to be named, and then run just the part of the command that registers the service with systemctl).

Run the agent – for demonstration purposes, I am using the run.sh script to launch the agent. This outputs details to my console instead of a log file.

If you have multiple servers to which you want to deploy the files, install and run an agent on each one.

Create the release pipeline

Now we will build the pipeline that actually copies files over to the agent. Under Pipelines, navigate to “Releases”. Select “New” and create a “New release pipeline”. Start with an empty job.

You’ll be asked to name the first stage of the deployment pipeline – here, I’m calling it “Deploy Files to Servers”. Close out of the Stage window to see the pipeline.

Click the “+ Add” next to Artifacts to link an artifact to the deployment

If you have a build pipeline, you can link that as the artifact. Since I am just copying files, I selected the “Azure Repo” and configured the test project that contains the files I wish to deploy to my servers.

Click “Add”

Back in the pipeline, click the “1 job, 0 task” hyperlink to create a file deployment task.

We don’t need the “Agent job”, so click on it and click “Remove”

Select the hamburger menu next to “Deploy files to servers” and select “Add a deployment group job”

Click the “Deployment group” dropdown and select the deployment group that contains the servers to which you want to deploy files. You can add tags to limit deployment to a subset of the deployment group – I don’t do this, but I have seen instances where “prod” and “dev” tags were used and all servers in both the prod and dev environment were part of the same deployment group.

Click the “+” on the “Deployment group job” item to add a task.

Find the “Copy files” task and click “Add” to create a task to copy files.

Click on the “Copy files to:” item to configure the task. The source folder is the Azure repo, and the target folder is the path on the server.

Click “Save” to save the task, then click “OK” to confirm the save.

Now create a release – click the “Create a release” button


When the deployment runs, the agent will show the job running.

Once the deployment completes, the files are on the server.

Scheduling Release

In the pipeline, you can click on “Schedule set” to schedule new releases.

Enable the schedule, set a time – I select to only schedule the release if the source or pipeline has changed … if I’ve not updated files in the repo, there’s no need to redeploy the files. Remember to save your pipeline when you add the schedule.

Manually Running a JAR File

The Java code I now maintain is normally executed through a k8s cluster — this means just testing a quick change requires running the entire deployment pipeline. Sometimes, though, I really just want to test something quickly. In such instances, you can manually run a jar file using “java -jar my_file.jar”.

Maven Build Certificate Error

Attempting to build some Java code, I got a lot of errors indicating a trusted certificate chain was not available:

Could not transfer artifact 
org.springframework.boot:spring-boot-starter-parent:pom:2.2.0.RELEASE 
from/to repo.spring.io (<redacted>): sun.security.validator.ValidatorException: 
PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target

And

[ERROR] Failed to execute goal on project errorhandler: 
Could not resolve dependencies for project com.example.npm:errorhandler:jar:0.0.1-SNAPSHOT: 
The following artifacts could not be resolved: 
org.springframework.boot:spring-boot-starter-data-jpa:jar:2.3.7.BUILD-SNAPSHOT, 
org.springframework.boot:spring-boot:jar:2.3.7.BUILD-SNAPSHOT, 
org.springframework.boot:spring-boot-configuration-processor:jar:2.3.7.BUILD-SNAPSHOT: 
Could not transfer artifact org.springframework.boot:spring-boot-starter-data-jpa:jar:2.3.7.BUILD-20201211.052207-37 
from/to spring-snapshots (https://repo.spring.io/snapshot): 
transfer failed for https://repo.spring.io/snapshot/org/springframework/boot/spring-boot-starter-data-jpa/2.3.7.BUILD-SNAPSHOT/spring-boot-starter-data-jpa-2.3.7.BUILD-20201211.052207-37.jar: 
Certificate for <repo.spring.io> doesn't match any of the subject alternative names: [] -> [Help 1]

Ideally, you could just add whatever cert(s) need to be trusted into the cacerts file for the Java instance using keytool (.\keytool.exe -import -alias digicert-intermed -cacerts -file c:\tmp\digi-int.cer); however, the work computers are locked down such that I am unable to import certs into the Java trust store. The second error makes me think it wouldn’t work anyway — if there’s no matching SAN on the cert, trusting the cert wouldn’t do anything.

Fortunately, there are a few flags you can add to mvn to ignore certificate errors — thus allowing the build to complete without requiring access to the cacerts file. There is, of course, a possibility that the trust failure is because your connection is being redirected maliciously … but I see enough other people getting trust failures for this spring-boot stuff (and visiting the site doesn’t show anything suspect) that I’m happy to bypass the security validation this once and just be done with the build 🙂

mvn package -DskipTests -Dmaven.wagon.http.ssl.insecure=true -Dmaven.wagon.http.ssl.allowall=true -Dmaven.wagon.http.ssl.ignore.validity.dates=true jib:build

Postgresql Through an SSH Tunnel in Python

Our production Postgresql servers have a fairly restrictive IP access control list — which means you cannot VPN in and query the server. We’ve been using DBeaver with an SSH tunnel to connect, but it’s a bit time consuming to run a query across all of the servers for monitoring and troubleshooting. To work around the restriction, I built a python script that uses an SSH tunnel to relay communications to the Postgresql servers.

import psycopg2
from sshtunnel import SSHTunnelForwarder

from config import strSSHRelayHost, iSSHRelayPort, strSSHRelayUser, strSSHAuthKeyFile, dictHost
# In the config.py, dictHost should contain the following information
# dictHost = {"host":"dbserver.example.com","port":5432,"database": "dbname", "username":"dbuser", "password":"S3cr3tPhr@5e"}

# Example query -- listing out locks 
sqlQuery = "WITH RECURSIVE l AS (  SELECT pid, locktype, mode, granted, ROW(locktype,database,relation,page,tuple,virtualxid,transactionid,classid,objid,objsubid) obj FROM pg_locks ), pairs AS ( SELECT w.pid waiter, l.pid locker, l.obj, l.mode FROM l w JOIN l ON l.obj IS NOT DISTINCT FROM w.obj AND l.locktype=w.locktype AND NOT l.pid=w.pid AND l.granted  WHERE NOT w.granted ), tree AS ( SELECT l.locker pid, l.locker root, NULL::record obj, NULL AS mode, 0 lvl, locker::text path, array_agg(l.locker) OVER () all_pids FROM ( SELECT DISTINCT locker FROM pairs l WHERE NOT EXISTS (SELECT 1 FROM pairs WHERE waiter=l.locker) ) l  UNION ALL  SELECT w.waiter pid, tree.root, w.obj, w.mode, tree.lvl+1, tree.path||'.'||w.waiter, all_pids || array_agg(w.waiter) OVER () FROM tree JOIN pairs w ON tree.pid=w.locker AND NOT w.waiter = ANY ( all_pids )) SELECT (clock_timestamp() - a.xact_start)::interval(3) AS ts_age, replace(a.state, 'idle in transaction', 'idletx') state, (clock_timestamp() - state_change)::interval(3) AS change_age, a.datname,tree.pid,a.usename,a.client_addr,lvl, (SELECT count(*) FROM tree p WHERE p.path ~ ('^'||tree.path) AND NOT p.path=tree.path) blocked, repeat(' .', lvl)||' '||left(regexp_replace(query, '\\s+', ' ', 'g'),100) query FROM tree JOIN pg_stat_activity a USING (pid) ORDER BY path"

with SSHTunnelForwarder( (strSSHRelayHost, iSSHRelayPort), ssh_username=strSSHRelayUser, ssh_private_key=strSSHAuthKeyFile, local_bind_address=("localhost",55432), remote_bind_address=(dictHost.get('host'), dictHost.get('port'))) as server:
# Alternately, you can use password authentication
#with SSHTunnelForwarder( (strSSHRelayHost, iSSHRelayPort), ssh_username=strSSHRelayUser, ssh_password=strSSHRelayUserPass, local_bind_address=("localhost",55432), remote_bind_address=(dictHost.get('host'), dictHost.get('port'))) as server:
    if server is not None:
        server.start()
        server._check_is_started()
        #print("Tunnel server connected")
        params = {'database': dictHost.get('database'),'user': dictHost.get('username'),'password': dictHost.get('password'), 'host': server.local_bind_host, 'port': server.local_bind_port}
        conn = psycopg2.connect(**params)
        cursor = conn.cursor()
        cursor.execute(sqlQuery)
        column_names = [desc[0] for desc in cursor.description]
        print(column_names)
        rows = cursor.fetchall()
        for row in rows:
            print(row)
        cursor.close()
        if conn is not None:
            conn.close()
        server.stop()
    else:
        print("Unable to establish SSH tunnel")
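
Since the whole point was running the same query across all of the servers, the same pattern can be wrapped in a loop. This is a minimal sketch, assuming config.py also defines a hypothetical listDbHosts (a list of dicts shaped like dictHost) and reusing the key-based tunnel settings and sqlQuery from above:

# Hypothetical: listDbHosts is a list of dicts shaped like dictHost, defined in config.py
from config import listDbHosts

for dictDbHost in listDbHosts:
    with SSHTunnelForwarder(
        (strSSHRelayHost, iSSHRelayPort),
        ssh_username=strSSHRelayUser,
        ssh_private_key=strSSHAuthKeyFile,
        remote_bind_address=(dictDbHost.get('host'), dictDbHost.get('port'))
    ) as server:
        # No local_bind_address here -- sshtunnel picks an ephemeral local port, so iterations don't collide
        params = {'database': dictDbHost.get('database'), 'user': dictDbHost.get('username'),
                  'password': dictDbHost.get('password'), 'host': server.local_bind_host,
                  'port': server.local_bind_port}
        conn = psycopg2.connect(**params)
        cursor = conn.cursor()
        cursor.execute(sqlQuery)
        print(dictDbHost.get('host'), cursor.fetchall())
        cursor.close()
        conn.close()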

Python List Joining

I’ve seen lists joined with a delimiter like this before:

strDelimiter = "\n"
strDelimiter.join(listOfStuff)

But it seems silly to allocate a variable just for the delimiter … not a big deal from a resource perspective, and a named delimiter variable is probably more comprehensible to future readers … but I’ve always wondered if you couldn’t just use a string literal with the join method. Turns out you can —

msgContent.attach(MIMEText("\n".join(listOfStuff), 'plain'))
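
As a quick sanity check, here is a self-contained snippet with made-up sample data showing that the literal behaves exactly like the named delimiter variable:

listOfStuff = ["line one", "line two", "line three"]   # sample data

# Join with a named delimiter variable ...
strDelimiter = "\n"
strJoinedVariable = strDelimiter.join(listOfStuff)

# ... or join directly on a string literal
strJoinedLiteral = "\n".join(listOfStuff)

print(strJoinedVariable == strJoinedLiteral)   # True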

 

Where did THAT come from? (PHP class)

I’ve got some code that is cobbled together from a couple of different places & it’s got namespace collisions that wouldn’t exist if I’d been starting from scratch. But I’ve got what I’ve got … and, occasionally, new code falls over because a class has already been declared.

Luckily, there’s a way to find out from where a class was loaded:

$strClassName = "Oracle_Cred";
if( class_exists($strClassName) ){
    $reflector = new \ReflectionClass($strClassName);
    echo "Class $strClassName was loaded from " . $reflector->getFileName();
}
else{
    echo "Class $strClassName does not exist yet";
}