Category: Technology

New (To Me) WordPress Spam Technique

In the past week, one particular image that I posted has received about a hundred comments. Not real comments from people who enjoyed the image, unfortunately. Spam-bot comments. I get a few spam comments a month, easily just dropped. But exponentially increasing numbers of comments were showing up on this page. The odd thing, though, is it wasn’t a page or a post. It was an image embedded in a post.

Evidently embedded pictures have their own “attachment page” — a page that includes a comment dialogue. I guess that’s useful for someone … maybe an artist who uses a gallery front-end for their media can still get comments on their pictures if the gallery doesn’t provide commenting. Not a problem I need solved. WordPress includes a comments_open filter that allows you to programmatically control where comments are available (provided your theme uses the filter).

How do you add a function to WordPress? I find a lot of people editing WordPress or theme files directly. Not a good idea — the next upgrade is going to blow your changes away. If you use an upgrade script, you could essentially ‘patch’ the theme during the upgrade process (append your function to the distributed file). Or you can just add your function as a plug-in. In your wp-content/plugins folder, make a folder with a good, descriptive name for your plugin (i.e. don’t call it myPlugin if you have any thoughts of distributing it). In that folder, make a PHP file with the same name (e.g. my filterCommentsByType folder has a filterCommentsByType.php file).

For what I’m doing, the comment header is longer than the code! The comment header is used to populate the Plugins page in your admin console. If you omit the header component, your plugin will not show up to be activated. Add your function and save the file:

<?php
/**
* Plugin Name: Filter Comments By Type
* Plugin URI: http://lisa.rushworth.us
* Description: This plugin allows commenting to be disabled based on post type
* Version: 1.0.0
* Author: Lisa Rushworth
* Author URI: http://lisa.rushworth.us
* License: GPL2
*/
add_filter( 'comments_open', 'remove_comments_by_post_type', 10, 2 );
function remove_comments_by_post_type( $boolInitialStatus, $iPostNumber ) {
    $post = get_post( $iPostNumber );
    if ( $post->post_type == 'attachment' ) { return false; }
    else { return $boolInitialStatus; }
}
?>

When you go to your admin console’s plugins section, your filter will appear in the list, deactivated. Click to activate it.

Voila, no more comments on attachment posts. Or whatever other type of post on which you wish to restrict commenting.
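If you want to block comments on several post types, a minimal variation (a sketch — the list of types here is hypothetical) could check against an array:

add_filter( 'comments_open', 'remove_comments_by_post_type', 10, 2 );
function remove_comments_by_post_type( $boolInitialStatus, $iPostNumber ) {
    // Hypothetical list of post types on which to disable commenting
    $aBlockedTypes = array( 'attachment', 'page' );
    $post = get_post( $iPostNumber );
    return in_array( $post->post_type, $aBlockedTypes ) ? false : $boolInitialStatus;
}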

Exchange 2013 Calendar Events In OpenHAB (CalDAV)

We’ve wanted to get our Exchange calendar events into OpenHAB — instead of trying to create a rule to determine whether preschool is in session, the repeating calendar event will dictate whether it is a break or a school day. Move the gymnastics session to a new day, and the audio reminder moves itself. Problem is, Microsoft stopped supporting CalDAV.

Scott found DAVMail — essentially a proxy that can translate between CalDAV clients and the EWS WSDL. Installation was straightforward (click ‘next’ a few times). Configuration: for Exchange 2013, you need to select the “EWS” Exchange protocol and use your server’s EWS WSDL URL, https://yourhost.domain.cTLD/ews/exchange.asmx … then enable a local CalDAV port.

On the ‘network’ tab, check the box to allow remote connections. You *can* put the thumbprint of the IIS web site server certificate for your Exchange server into the “server certificate hash” field or you can leave it blank. On the first connection through DAVMail, there will be a pop-up asking you to verify and accept the certificate.

On the ‘encryption’ tab, you can configure a private keystore to allow the client to communicate over SSL. I used a PKCS12 store (Windows type), but a Java keystore should work too (you may need to add the key signing key — a.k.a. the CA public key — to the CA truststore for your Java instance).

On the ‘advanced’ tab, I did not enable Kerberos because the OpenHAB CalDAV binding passes credentials. I did enable KeepAlive — not sure if it is used, since the CalDAV binding seems to poll. Save the changes and open the DAVMail log viewer to verify traffic is coming through.
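These GUI settings land in DAVMail’s davmail.properties file; a sketch of the relevant entries as I recall them (verify the property names against your own file — the values mirror the setup above):

davmail.url=https://yourhost.domain.cTLD/ews/exchange.asmx
davmail.enableEws=true
davmail.caldavPort=1080
davmail.allowRemote=true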

Then comes Scott’s part — enable the bindings in OpenHAB (there are two of them: CalDAVIO and CalDAVCmd). In caldavio.cfg, the config lines need to be prefixed with ‘caldavio’ even though that’s not how it works in OpenHAB2.

caldavio:CalendarIdentifier:url=https://yourhost.yourdomain.gTLD:1080/users/mailbox@yourdomain.gTLD/calendar
caldavio:CalendarIdentifier:username=mailbox@yourdomain.gTLD
caldavio:CalendarIdentifier:password=PasswordForThatMailbox
caldavio:CalendarIdentifier:reloadInterval=5
caldavio:CalendarIdentifier:disableCertificateVerification=true

Then in the caldavCommand.cfg file, you just need to tell it to load that calendar identifier:

caldavCommand:readCalendars=CalendarIdentifier

We have needed to stop OpenHAB and delete the config files related to this calendar and binding from ./config/org/openhab/ before config changes were ingested.

The last step is making a calendar item that can do stuff. In the big text box where a message body would go (no idea what that’s called on a calendar entry), add:

BEGIN:Item_Name:STATE
END:Item_Name:STATE

The subject can be whatever you want. The event’s start and end times are when the BEGIN and END commands fire. Voila!
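For a concrete (hypothetical) example — assuming an OpenHAB item named GymnasticsReminder exists, an event whose body contains the following sends ON to that item when the event starts and OFF when it ends:

BEGIN:GymnasticsReminder:ON
END:GymnasticsReminder:OFF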

Setting Up DNSSEC

Last time I played around with the DNS Security Extensions (DNSSEC), the root and .com zones were not signed. Which meant you had to manually establish trusts before there was any sort of validation happening. Since the corporate standard image didn’t support DNSSEC anyway … wasn’t much point on either the server or client side. I saw ICANN postponed a key rollover for root a few days ago, and realized hey, root is signed now. D’oh, way to keep up, huh?

So we’re going to sign the company zones and make sure our clients are actually looking at zone signatures when they exist. Step #1 – signing our test zone. I do this in a screen session because it can take a long time to generate a key. If the process gets interrupted for whatever reason, you get to start ALL OVER. I am using ISC BIND – how to do this on any other platform, well LMGTFY 🙂

# Start a screen session
screen -S LJR-DNSSEC-KeyGen
# Use dnssec-keygen to create a zone signing key (ZSK) – bit value is personal preference
dnssec-keygen -a NSEC3RSASHA1 -b 2048 -n ZONE rushworth.us
# Then use dnssec-keygen to create a key signing key (KSK) – bit value is still personal preference
dnssec-keygen -f KSK -a NSEC3RSASHA1 -b 4096 -n ZONE rushworth.us

Grab the content of the *.key files and append them to your zone file.
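A sketch of that step plus the signing that follows, assuming the zone data lives in rushworth.us.db (the key file names and the NSEC3 salt here are illustrative):

# Append the public keys (ZSK and KSK) to the zone file
cat Krushworth.us.+007+*.key >> rushworth.us.db
# Sign the zone; -3 takes a hex NSEC3 salt, -N INCREMENT bumps the serial.
# dnssec-signzone writes rushworth.us.db.signed, which is what named should load.
dnssec-signzone -3 ab12 -o rushworth.us -N INCREMENT rushworth.us.db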

Apache HTTP Sandbox With Docker

I set up a quick Apache HTTPD sandbox — primarily to test authentication configurations — in Docker today. It was an amazingly quick process.

Install an image that has an Apache HTTPD server:    docker pull httpd
Create a local file system for the Apache config files (c:\docker\httpd\httpd.conf for the main config, c:\docker\httpd\conf.d for all of the extras like ssl.conf and php.conf plus the web sites, and c:\docker\httpd\vhtml for the web site content)
Launch the container: docker run --detach --publish 80:80 --publish 443:443 --name ApacheWebServer --restart always -v /c/docker/httpd/httpd.conf:/etc/httpd/conf/httpd.conf:ro -v /c/docker/httpd/conf.d/:/etc/httpd/conf.d/:ro -v /c/docker/httpd/vhtml/:/var/www/vhtml/:ro httpd

Shell into it (docker exec -it ApacheWebServer bash) to look around, or just access http://localhost from the Docker host.
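A quick sanity check from the Docker host (a sketch, assuming you’ve dropped some content into the vhtml folder):

# Confirm the container is running, then request a page from the host
docker ps --filter name=ApacheWebServer
curl http://localhost/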

PPM Via Windows Authenticated Proxy

The office proxy used to use BASIC authentication. Which was terrible: credentials were transmitted in clear text. Some years ago, they implemented a new proxy server capable of using Kerberos tickets for authentication (actually, the old one could have done it too – I’ve set up the Kerberos realm on another implementation of the same product, but it wasn’t a straightforward clickity-click and you’re done). Awesome move, but it did break everything that used the HTTP_PROXY environment variable with creds included (yeah, I have a no-rights account with proxy access and put that in clear text all over the place). I just stopped using wget and curl to download files. I’d pull them to my Windows box, then scp them to the right place. But every once in a while I need a new Perl module that’s available from ActiveState’s PPM. I’d have to fetch the tgz file and install it manually.

Until today — I was configuring a new Fiddler installation. Brilliant program – it’s just a web proxy that you can use for debugging purposes, but it can insert itself into HTTPS communications and provide clear-text rendering of encrypted sessions too. It also proxies proxy credentials! There’s a config to allow remote hosts to connect – it’s normally bound to 127.0.0.1:8888, but it can bind to 0.0.0.0:8888 as well. If you have your web browser open and visit a site through the proxy server (i.e. you make sure the browser is authenticating fine) … set your HTTP_PROXY to http://127.0.0.1:8888 (or whatever means the specific program uses to configure a proxy). Voila, PPM hits Fiddler. Fiddler relays the request out to the proxy using the Kerberos token on your desktop. Package installs. A lot of overhead just to avoid unzipping a file … but if you are installing a package with a dozen dependencies, it’s a lot quicker than failing your install a dozen times and getting the next prereq each time!
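For PPM specifically, a minimal sketch from a Windows command prompt (the module name is just an example):

:: Route command-line tools through the local Fiddler instance
set HTTP_PROXY=http://127.0.0.1:8888
set HTTPS_PROXY=http://127.0.0.1:8888
:: PPM now reaches the office proxy via Fiddler's authenticated session
ppm install Some-Module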

PHP: Windows Authentication to MS SQL Database

I’ve encountered several people now who have followed “the directions” to allow their IIS-hosted PHP code to authenticate to a MS SQL server using Windows authentication … only to get an error indicating some unexpected ID is unable to log into the SQL server.

Create your application pool and add an identity. Turn off fastcgi.impersonate in your php.ini file. Create web site, use custom application pool … FAIL.

C:\Users\administrator.RUSHWORTH>%windir%\system32\inetsrv\appcmd.exe list config "Exchange Back End" /section:anonymousAuthentication
<system.webServer>
  <security>
    <authentication>
      <anonymousAuthentication enabled="true" userName="IUSR" />
    </authentication>
  </security>
</system.webServer>

The web site still doesn’t pick up the user from the application pool. Click on Anonymous Authentication, then click “Edit” over in the actions pane. Change it to use the application pool identity here too (why wouldn’t it automatically do so when an identity is provided? No idea!).
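The same change can be scripted with appcmd (a sketch — substitute your site’s name; the empty userName is what flips anonymous authentication over to the pool identity):

%windir%\system32\inetsrv\appcmd.exe set config "Your Site Name" /section:anonymousAuthentication /userName:"" /commit:apphost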

C:\Users\administrator.RUSHWORTH>%windir%\system32\inetsrv\appcmd.exe list config "Exchange Back End" /section:anonymousAuthentication
<system.webServer>
  <security>
    <authentication>
      <anonymousAuthentication enabled="true" userName="" />
    </authentication>
  </security>
</system.webServer>

I’ve always seen the null string in userName, although I’ve read that the element may be omitted entirely. Once the site is actually using the pool identity, PHP can authenticate to the SQL server using Windows authentication.
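For reference, a minimal sketch of the PHP side using the sqlsrv driver — the server and database names here are hypothetical; omitting UID and PWD from the connection options is what makes the driver use Windows authentication (i.e. the application pool identity):

<?php
// Omitting UID and PWD causes the sqlsrv driver to use Windows
// authentication -- the IIS application pool identity in this setup.
$strServer = "sqlserver.rushworth.us";                // hypothetical server name
$aConnectionInfo = array( "Database" => "SampleDB" ); // hypothetical database
$conn = sqlsrv_connect( $strServer, $aConnectionInfo );
if( $conn === false ){
    die( print_r( sqlsrv_errors(), true ) );
}
echo "Connected as the application pool identity";
?>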

Facebook’s Offensive Advertising Profiles

As a programmer, I assumed Facebook used some sort of statistical analysis to generate advertising categories from user input rather than employing a marketing group. A statistical analysis of the phrases being typed is *generally* an accurate reflection of what people type, although I’ve encountered situations where their code does not appropriately weight adjectives (FB thought I was a Trump supporter because “incompetent”, “misogynist”, “unqualified”, etc. didn’t clue them in to my real beliefs). But I don’t think the listings causing an uproar this week were factually wrong.
 
Sure, the market segment name is offensive; but computers don’t natively identify human offense. I used to manage the spam filtering platform for a large company (back before hourly anti-spam definition updates were a thing). It is impossible to write every iteration of every potentially offensive string out there. We would get e-mails for \/|@GR@! As such, there isn’t a simple list of word combinations that shouldn’t appear in your marketing profiles. It would be quite limiting to avoid ‘kill’ or ‘hate’ in profiles too — a group of people who hate vegetables is a viable target market. Or those who make killer mods to their cars.
 
FB’s failing, from a development standpoint, is not having a sufficiently robust set of heuristic principles against which target demos are analyzed for non-publication. They may have assumed the list would be self-pruning: no company is going to buy ads to target “kill all women”, so any advertising string that receives under some threshold of buys in a given interval gets dropped. Lazy, but I’m a lazy programmer and could *totally* see myself going down that path. And spinning it as the most efficient mechanism at that. To me, this is the difference between a computer science major and an information sciences major. Computer science is about perfecting the algorithm to build categories from user input and optimizing the results by mining purchase data to determine which categories are worth retaining. Information science teaches you to consider the business impact of customers seeing the categories which emerge from user input.
 
There are ad demos for all sorts of other offensive groups, so it isn’t as if the algorithm unfairly targeted a specific group. Facebook makes money from selling advertisements to companies based on what FB users talk about. It isn’t a specific attempt to profit by advertising to hate groups; it’s an attempt to profit by dynamically creating marketing demographic categories and sorting people into their bins.
 
The only thing that really offends me about this story is that unpleasant people are partaking in unpleasant conversations. Which isn’t news, nor is it really FB’s fault beyond creating a platform to facilitate the discussion. Possibly some unpleasant companies are targeting their ads to these individuals … although that’s not entirely FB’s fault either. Buy an ad in Breitbart and you can target a bunch of white supremacists.

Checking Supported TLS Versions and Ciphers

There have been a number of SSL/TLS vulnerabilities (and deprecated ciphers that should be unavailable, especially when transiting particularly sensitive information). On Linux distributions, nmap includes a script that enumerates SSL/TLS versions and, per version, the supported ciphers.

[lisa@linuxbox ~]# nmap -P0 -p 25 --script +ssl-enum-ciphers myhost.domain.ccTLD

Starting Nmap 7.40 ( https://nmap.org ) at 2017-10-13 11:36 EDT
Nmap scan report for myhost.domain.ccTLD (#.#.#.#)
Host is up (0.00012s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE
25/tcp open smtp
| ssl-enum-ciphers:
| TLSv1.0:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: server
| TLSv1.1:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: server
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (rsa 2048) - A
| TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CCM_8 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CCM (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CCM_8 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CCM (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384 (rsa 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 (rsa 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CCM_8 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CCM (rsa 2048) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CCM_8 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CCM (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: server
|_ least strength: A

Nmap done: 1 IP address (1 host up) scanned in 144.67 seconds
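If nmap isn’t handy, a single protocol version can be spot-checked with openssl s_client (a sketch — swap the version flag for whichever protocol you want to test):

# Test whether the SMTP server will negotiate TLSv1.2
openssl s_client -connect myhost.domain.ccTLD:25 -starttls smtp -tls1_2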

Security Standards For Financial Information

A long time ago, processors of credit card information didn’t have any standards. And they’d lose your data. People didn’t like that, and some type of regulation had to be put on the industry. The credit card processors got together and formed an initiative to write their own regulations – PCI. They were a lot more concerned with the regulations’ impact on profitability than government regulators would have been. The PCI standards were fairly effective.

And now one of the credit bureaus has lost a huge amount of personal data – including social security numbers and account numbers, which I don’t understand being stored as anything other than a one-way hash in the first place. But the bigger question is: how are these credit bureaus able to operate with standards less strict than the industry-association-generated PCI standards? My guess is that there will be a credit bureau industry association writing security standards within the next week or so. If there isn’t an industry association forming to ensure my social security number and account numbers aren’t stored in clear text on web-accessible servers at credit bureaus … I should hope the government would intervene and mandate a certain level of security.