Category: System Administration

Active Directory Federation Services (ADFS) Relying Party Trust Cert Expiry

At work, we received a critical ticket for an application that was unable to authenticate to ADFS. Nothing globally wrong – other applications were authenticating. A long call later, we discovered that the app’s certificate had expired. Why wouldn’t the application owners monitor their certificate expiry dates? That’s an excellent question, but not one over which I have any control.

We can, however, monitor their certs on our side. So I wrote a quick PowerShell script to grab the certificates from the relying party trusts and alert us if any certs will expire in the next 30 days. It has to run on the ADFS server – I’d love to get it moved to the automation server in the future. I expect Get-AdfsRelyingPartyTrust returns disabled trusts as well, so I filter those out.
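
The gist of it – a minimal sketch rather than the production script, assuming the trust’s certs are exposed through the EncryptionCertificate and RequestSigningCertificate properties, and using Write-Warning as a stand-in for whatever alerting (e-mail, ticket) you actually want:

$dtThreshold = (Get-Date).AddDays(30)

# Only look at enabled relying party trusts
Get-AdfsRelyingPartyTrust | Where-Object { $_.Enabled } | ForEach-Object {
    $trust = $_
    # Gather the encryption cert and any request signing certs attached to the trust
    $certs = @($trust.EncryptionCertificate) + @($trust.RequestSigningCertificate) | Where-Object { $_ }
    foreach ($cert in $certs) {
        if ($cert.NotAfter -le $dtThreshold) {
            # Swap in Send-MailMessage or ticket creation here
            Write-Warning ("{0}: certificate '{1}' expires {2}" -f $trust.Name, $cert.Subject, $cert.NotAfter)
        }
    }
}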

Spectre & Meltdown

The academic whitepapers for both of these vulnerabilities can be found at https://spectreattack.com/ — or El Reg’s article and their other article provide a good summary for those not inclined to slog through technical nuances. There’s a lot of talk about chip manufacturers’ stock drops and vendor patches … but I don’t see anyone asking how bad this is on hosted platforms. Can I sign up for a free Azure trial and start accessing data on your instance? Even if they isolate free trial accounts (and accounts given to students through University relationships), is a potential trove of data worth a few hundred bucks to a hacker? Companies run web storefronts that process credit card info, so there’s potentially profit to be made. Hell, is the data worth a few million to some state-sponsored entity or someone getting into industrial espionage? I’m really curious if MS uses the same Azure farms for their hosted Exchange and SharePoint services.

While Meltdown has patches (not such a big deal if your use cases are GPU-intensive games, but does a company want a 30% performance hit on business process servers, automated build and testing machines, data mining servers?), Spectre patches turn IT security into TSA regulations. We can make a patch to mitigate the last exploit that occurred. Great for everyone else, but it doesn’t help anyone who experienced that last exploit. Or the people about to get hit with the next exploit.

I wonder if Azure and AWS are going to give customers a 5-30% discount after they apply the performance reducing patch? If I agreed to pay x$ for y processing capacity, now they’re supplying 0.87y … why wouldn’t I pay 0.87x$?

Ransomware

My company held a ransomware response thought experiment recently – and, honestly, every ransomware response I’ve seen has been some iteration of “walk through backups until we find good files”. Maybe use something like the SharePoint versioning to help identify a good target date (although that date may be different for different files … who knows!). But why wouldn’t you attempt a proactive identification of compromised files?

The basis of ransomware is that it encrypts data and you get the password after paying so-and-so a bitcoin or three. NGO virus authors (i.e. those who aren’t trying to slow down Iran’s centrifuges) are generally interested in creating mayhem, and there’s not a lot of disincentive to creating mayhem while making a couple of bucks. I don’t anticipate ransomware becoming less prevalent in the future; in fact, I anticipate seeing it in vigilante hacking: EntityX gets their files back after they publicly donate 100k to their antithesis organisation.

Since it’s probably not going away, it seems worthwhile to immediately identify the malicious data scrambling. Reverting to yesterday’s backups sucks, but not as much as finding that your daily backups have aged out and you’re stuck with the monthly backup from 01 Nov as your last “good” data set. It would also be good to merge whatever your last good backup is into the non-encrypted files, so the only ‘stuff’ that reverts is a worthless scramble of data anyway. Sure, someone may have worked on the file this morning, and it sucks for them to find their work back-rev’d to last night – but again, that’s better than everyone having to reproduce their last two and a half months of work.

Promptly identifying the attack: There are routine processes that read changed files – Windows Search indexing, antivirus scanning, SharePoint indexing. Running against the Windows Search index log on every computer in the organisation is logistically challenging. Not impossible, but not ideal either. A central log for enterprise AV software or the SharePoint indexing log, however, can be parsed from the data centre. Scrape the log files for “unable to read this encrypted file” events. Then there are a myriad of actions that can be taken. Alert the file owner and have them confirm the file should be encrypted. Alert the IT staff when more than x encrypted files are identified in a unit of time. Check the creation time-stamp and alert the file owner for any file that existed before it was first encountered as encrypted.
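
As a rough sketch of the log-scraping idea – assuming a central AV log where failed scans show up as lines containing something like “unable to read this encrypted file” along with a file path; the log location, match string, output path, and threshold here are all hypothetical and will vary by product:

# Hypothetical central AV scan log and alert threshold
$strLogFile = "\\avserver\logs\scan.log"
$iThreshold = 25

# Pull out the events where the scanner could not read an encrypted file
$encryptedFileEvents = Select-String -Path $strLogFile -Pattern "unable to read this encrypted file"

if ($encryptedFileEvents.Count -gt $iThreshold) {
    # Swap in Send-MailMessage or ticket creation for the IT staff
    Write-Warning ("{0} newly encrypted files found in {1}" -f $encryptedFileEvents.Count, $strLogFile)
}

# The matched lines become the scope for owner notification or a targeted restore
$encryptedFileEvents | ForEach-Object { $_.Line } | Out-File "c:\temp\encrypted-file-list.txt"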

Restoring only scrambled files: Since you have a list of encrypted files, you have a scope for the restore job. Instead of restoring everything in place (because who has 2x the storage space to restore to an alternate location?!), restore just the files recently identified as encrypted – to an alternate location or in place. Ideally you’ve gotten user input on the encrypted files and can omit any the user indicated they encrypted themselves.

Scraping OpenHAB Karaf Console Data

I realized an easier way of scraping the Karaf console output – no need to SSH into the console (which, evidently, can time out for inactivity … something I sort out on my OpenSSH server with a config parameter whenever I’m looking to use tee and scrape output).

You can just pipe the startup script to tee. You have to push stderr into stdout to get the errors logged as well.

./start.sh 2>&1 | tee -a /tmp/logfile.txt

The output gets a little funky – maybe because of the color flags on some of the text? Dunno, but it’s grabbing the text, and something like tail displays it without too much funky odd stuff:

ESC[31m ESC[0m __ _____ ____ ESC[0m
ESC[31m ____ ____ ___ ____ ESC[0m/ / / / | / __ ) ESC[0m
ESC[31m / __ \/ __ \/ _ \/ __ \ESC[0m/ /_/ / /| | / __ | ESC[0m
ESC[31m/ /_/ / /_/ / __/ / / / ESC[0m__ / ___ |/ /_/ / ESC[0m
ESC[31m\____/ .___/\___/_/ /_/ESC[0m_/ /_/_/ |_/_____/ ESC[0m
ESC[31m /_/ ESC[0m 2.2.0-SNAPSHOTESC[0m
ESC[31m ESC[0m Build #1114 ESC[0m

Hit 'ESC[1m<tab>ESC[0m' for a list of available commands
and 'ESC[1m[cmd] --helpESC[0m' for help on a specific command.
Hit 'ESC[1m<ctrl-d>ESC[0m' or type 'ESC[1msystem:shutdownESC[0m' or 'ESC[1mlogoutESC[0m' to shutdown openHAB.

ESC[?1hESC=ESC[?2004hESC[36mopenhab>ESC[0m

But you get the java exceptions too:

      Exception in thread "pool-45-thread-5" java.lang.NullPointerException
              at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
              at com.zsmartsystems.zigbee.ZigBeeNode.setNeighbors(ZigBeeNode.java:510)
              at com.zsmartsystems.zigbee.ZigBeeNetworkMeshMonitor$2.run(ZigBeeNetworkMeshMonitor.java:232)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
              at java.lang.Thread.run(Thread.java:748)
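
If the raw ANSI color codes make the log file annoying to work with, they can be stripped when reviewing it – a quick sketch, assuming GNU sed (the pattern covers the common color and mode sequences, not every escape Karaf emits):

sed 's/\x1b\[[0-9;?]*[a-zA-Z]//g' /tmp/logfile.txt | less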


The Colloquial Occam’s Razor

Occam’s razor – it is futile to do with more things that which can be done with fewer – is colloquially rendered as “the simplest solution is the most likely”. We had multiple tickets opened today for authentication failures on an Apache web server. Each malfunctioning site uses LDAP authentication and authorization against an Oracle Unified Directory. Nothing in the error logs. The service account from the Apache configuration can log in and query the directory from the box using ldapsearch, so the account is valid and there is nothing in the OUD preventing access from this particular host.

That’s a puzzler, and I was about to take down a lot of web sites to reload the service with its log level set to debug. I’m not even sure what made me do it, but I went out to the groups and looked at their member lists. Oops. Something had gone wrong with the identity management platform, and employee accounts had been cleared from the groups (all of the contractors were still members, which made it even stranger). Added a few people back into the groups appropriate for their positions, and voila – they could log into their sites again.

No idea how the identity management group restored the memberships, but verifying people who should have been members (who had been members and had done nothing to remove their memberships) were actually members of the group saved a lot of time running through debug logs. Sometimes the simplest answer is the most likely.

Strange spam

We have been getting spam messages with the subject “top level quality of paint bucket” both at home and at work. I get that it costs essentially nothing to send a million junk e-mail messages, so it doesn’t take a lot of sales for a campaign to be profitable. But are there seriously people who buy their paint buckets from cold e-mails? Especially e-mails that I thought were trying to sell me buckets of paint.

And how lazy is a spam campaign that uses static strings in the subject field?

Load Runner And Statistical Analysis Thereof

I had offhandedly mentioned a statistical analysis I had run in the process of writing and implementing a custom password filter in Active Directory. It’s a method I use for most of the major changes we implement at work – application upgrades, server replacements, significant configuration changes.

To generate the “how long did this take” statistics, I use a Perl script built on the Time::HiRes module (_loadsimAuthToCentrify.pl), which measures time at microsecond resolution. There’s an array of test scenarios — my most recent test was Unix/Linux host authentication using pure LDAP authentication and Centrify authentication, so the array held fully qualified hostnames. Sometimes there’s an array of IDs on which to test — TestID00001, TestID00002, TestID00003, …., TestID99999. And there’s a function to perform the actual test.

I then have a loop to generate a pseudo-random number and select the test to run (and user ID to use, if applicable) using that number:

# Map a pseudo-random number onto the array of test hosts
my $iRandomNumber = int(rand() * 100);
$iRandomNumber = $iRandomNumber % $iHosts;
my $strHost = $strHosts[$iRandomNumber];

The time is recorded prior to running the function (my $t0 = [gettimeofday];) and the elapsed time is calculated when returning from the function (my $fElapsedTimeAuthentication = tv_interval ($t0, [gettimeofday]);). The test result is compared to an expected result and any mismatches are recorded.
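
Pulled together, the core of each cycle looks something like this – a simplified, self-contained sketch in which the host list, the result log path, and the testAuthentication function are hypothetical stand-ins for the real scenario data and test code:

use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Hypothetical stand-ins for the real scenario data and test function
my @strHosts = ('host1.example.com', 'host2.example.com');
my $iHosts = scalar @strHosts;
my $strExpectedResult = 'OK';
sub testAuthentication { my ($strHost) = @_; return 'OK'; }    # placeholder for the real LDAP/Centrify test

open( my $fhResults, '>>', '/tmp/authTimingResults.log' ) or die $!;

for ( my $i = 0; $i < 1000; $i++ ) {
    my $strHost = $strHosts[ int(rand() * 100) % $iHosts ];

    my $t0 = [gettimeofday];
    my $strResult = testAuthentication($strHost);
    my $fElapsedTimeAuthentication = tv_interval( $t0, [gettimeofday] );

    # Record the scenario, result, and elapsed time; flag anything that did not match expectations
    print $fhResults join( "\t", $strHost, $strResult, $fElapsedTimeAuthentication ), "\n";
    warn "Unexpected result for $strHost: $strResult\n" if $strResult ne $strExpectedResult;
}
close $fhResults;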

Once the cycle has completed, the test scenario, results, and time to complete are recorded to a log file. Some tests are run multi-threaded and across multiple machines – in which case the result log file is named with both the running host’s name and a thread identifier. All of the result files are concatenated into one big result log for analysis.

A test is run before the change is made, and a new test is run for each variant of the change, for comparison. We then want to confirm that the general time to complete an operation has not been negatively impacted by the change we propose (or select a route based on the best performance outcome).

Each scenario’s result set is dropped into a tab on an Excel spreadsheet (CustomPasswordFilterTiming – I truncated a lot of data to avoid publishing a 35 meg file, so the numbers on the individual tabs no longer match the numbers on the summary tab). On the time column, MAX/MIN/AVERAGE/STDEV functions are run to summarize the result set. I then break the time range between 0 and the max time into buckets and use the COUNTIF function to determine how many results fall into each bucket (it’s easier to count the number at or below a threshold and then subtract the counts from the previous buckets than to build a combined statement that counts only the occurrences within a specific bucket).
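
As an illustration of that bucketing approach – assuming, hypothetically, that the elapsed times sit in column C, the bucket upper bounds run down column E starting at E2, and the counts are built in column F – the first bucket is a straight COUNTIF (placed in F2) and each subsequent bucket subtracts the prior buckets from the cumulative count (placed in F3 and filled down):

=COUNTIF($C:$C,"<="&$E2)
=COUNTIF($C:$C,"<="&$E3)-SUM(F$2:F2)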

Once this information is generated for each scenario, I create a summary tab so the data can be easily compared.

And finally, a graph is built using the lower part of that summary data. Voila – a quickly viewed visual representation of several million cycles. This is what gets included in the project documentation for executive consideration. The whole spreadsheet is stored in the project document repository – showing our due diligence in validating that the user experience should not be negatively impacted, as well as providing a baseline of expected performance should the production implementation yield user experience complaints.


New (To Me) WordPress Spam Technique

In the past week, one particular image that I posted has received about a hundred comments. Not real comments from people who enjoyed the image, unfortunately. Spam-bot comments. I get a few spam comments a month, easily just dropped. But exponentially increasing numbers of comments were showing up on this page. The odd thing, though, is it wasn’t a page or a post. It was an image embedded in a post.

Evidently embedded pictures have their own “attachment page” — a page that includes a comment dialog. I guess that’s useful for someone … maybe an artist who uses a gallery front-end to their media can still get comments on their pictures if the gallery doesn’t provide commenting. Not a problem I need solved. WordPress includes a comments_open filter that allows you to programmatically control where comments are available (provided your theme uses the filter).

How do you add a function to WordPress? I find a lot of people editing WordPress or theme files directly. Not a good idea — the next upgrade is going to blow your changes away. If you use an upgrade script, you could essentially ‘patch’ the theme during the upgrade process (append your function to the distributed file). Or you can just add your function as a plug-in. In your wp-content/plugins folder, make a folder with a good, descriptive name for your plugin (i.e. don’t call it myPlugin if you have any thoughts of distributing it). In that folder, make a PHP file with the same name (e.g. my filterCommentsByType folder has a filterCommentsByType.php file).

For what I’m doing, the comment header is longer than the code! The comment header is used to populate the Plugins page in your admin console. If you omit the header component, your plugin will not show up to be activated. Add your function and save the file:

<?php
/**
* Plugin Name: Filter Comments By Type
* Plugin URI: http://lisa.rushworth.us
* Description: This plugin allows commenting to be disabled based on post type
* Version: 1.0.0
* Author: Lisa Rushworth
* Author URI: http://lisa.rushworth.us
* License: GPL2
*/
add_filter( 'comments_open', 'remove_comments_by_post_type', 10, 2 );
function remove_comments_by_post_type( $boolInitialStatus, $iPostNumber ) {
    $post = get_post( $iPostNumber );
    if ( $post->post_type == 'attachment' ) { return false; }
    else { return $boolInitialStatus; }
}
?>

When you go to your admin console’s plugins section, your filter will appear in the list, deactivated. Click to activate it.

Voila, no more comments on attachment posts. Or whatever other type of post on which you wish to restrict commenting.

Setting Up DNSSEC

Last time I played around with the DNS Security Extensions (DNSSEC), the root and .com zones were not signed. Which meant you had to manually establish trusts before there was any sort of validation happening. Since the corporate standard image didn’t support DNSSEC anyway … wasn’t much point on either the server or client side. I saw ICANN postponed a key rollover for root a few days ago, and realized hey, root is signed now. D’oh, way to keep up, huh?

So we’re going to sign the company zones and make sure our clients are actually looking at zone signatures when they exist. Step #1 – signing our test zone. I do this in a screen session because it can take a long time to generate a key. If the process gets interrupted for whatever reason, you get to start ALL OVER. I am using ISC Bind – how to do this on any other platform, well LMGTFY 🙂

# Start a screen session
screen -S LJR-DNSSEC-KeyGen
# Use dnssec-keygen to create a zone signing key (ZSK) – bit value is personal preference
dnssec-keygen -a NSEC3RSASHA1 -b 2048 -n ZONE rushworth.us
# Then use dnssec-keygen to create a key signing key (KSK) – bit value is still personal preference
dnssec-keygen -f KSK -a NSEC3RSASHA1 -b 4096 -n ZONE rushworth.us

Grab the content of the *.key files and append them to your zone file.
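
From there – a sketch of the usual BIND procedure, with the zone file name and paths as hypothetical placeholders for wherever your zone actually lives – the public keys get appended to the zone and the zone gets signed with dnssec-signzone:

# Run from the directory holding the K*.key / K*.private files
# Append the public keys (ZSK and KSK) to the zone file
cat Krushworth.us.+007+*.key >> db.rushworth.us
# Sign the zone: -o is the origin, -N INCREMENT bumps the SOA serial, -3 supplies a random NSEC3 salt, -t prints stats
dnssec-signzone -3 $(head -c 1000 /dev/urandom | sha1sum | cut -b 1-16) -N INCREMENT -o rushworth.us -t db.rushworth.us
# Point named.conf at the resulting db.rushworth.us.signed file and reload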

Apache HTTP Sandbox With Docker

I set up a quick Apache HTTPD sandbox — primarily to test authentication configurations — in Docker today. It was an amazingly quick process.

Install an image that has an Apache HTTPD server: docker pull httpd
Create a local file system for the Apache config files (c:\docker\httpd\httpd.conf for the main config, c:\docker\httpd\conf.d for all of the extras like ssl.conf and php.conf plus the web site configs) and c:\docker\httpd\vhtml for the web site content
Launch the container: docker run --detach --publish 80:80 --publish 443:443 --name ApacheWebServer --restart always -v /c/docker/httpd/httpd.conf:/etc/httpd/conf/httpd.conf:ro -v /c/docker/httpd/conf.d/:/etc/httpd/conf.d/:ro -v /c/docker/httpd/vhtml/:/var/www/vhtml/:ro httpd

Shell into it (docker exec -it ApacheWebServer bash) to look around, or just access http://localhost from the Docker host.
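
As an example of the sort of thing that goes into the mounted conf.d directory – a hypothetical basic-auth test config (the file name, protected directory, and htpasswd file are all placeholders; the paths line up with the volume mounts above, and the htpasswd file would need to be created in the mounted folder too):

# c:\docker\httpd\conf.d\authtest.conf – visible in the container at /etc/httpd/conf.d/authtest.conf
<Directory "/var/www/vhtml/secure">
    AuthType Basic
    AuthName "Docker Auth Sandbox"
    AuthUserFile "/etc/httpd/conf.d/htpasswd"
    Require valid-user
</Directory>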