Category: Technology

PHP: Windows Authentication to MS SQL Database

I’ve encountered several people now who have followed “the directions” to allow their IIS-hosted PHP code to authenticate to an MS SQL server using Windows authentication … only to get an error indicating some unexpected ID is unable to log into the SQL server.

Create your application pool and set a custom identity. Turn off fastcgi.impersonate in your php.ini file. Create the web site using the custom application pool … FAIL.

C:\Users\administrator.RUSHWORTH>%windir%\system32\inetsrv\appcmd.exe list config "Exchange Back End" /section:anonymousAuthentication
<system.webServer>
  <security>
    <authentication>
      <anonymousAuthentication enabled="true" userName="IUSR" />
    </authentication>
  </security>
</system.webServer>

The web site still doesn’t pick up the user from the application pool. In the site’s Authentication settings, click on Anonymous Authentication, then click “Edit” over in the actions pane. Change it to use the application pool identity here too (why wouldn’t it automatically do so when an identity is provided?? no idea!).
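If you prefer the command line, the same change can presumably be made with appcmd’s set config verb – a hedged sketch reusing the site name from above (I’ve only done this through the GUI myself):

%windir%\system32\inetsrv\appcmd.exe set config "Exchange Back End" /section:anonymousAuthentication /userName:"" /commit:apphost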

C:\Users\administrator.RUSHWORTH>%windir%\system32\inetsrv\appcmd.exe list config "Exchange Back End" /section:anonymousAuthentication
<system.webServer>
  <security>
    <authentication>
      <anonymousAuthentication enabled="true" userName="" />
    </authentication>
  </security>
</system.webServer>

I’ve always seen the null string in userName, although I’ve read that the attribute may be omitted entirely. Once the site is actually using the pool identity, PHP can authenticate to the SQL server using Windows authentication.
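For reference, a minimal sketch of the PHP side. The server and database names are placeholders; the key point is that leaving UID and PWD out of the connection options makes the sqlsrv driver use Windows authentication, i.e. the application pool identity:

<?php
// Placeholder server and database names. Omitting UID/PWD from the options
// array tells the sqlsrv driver to use Windows authentication, so the
// connection is made as the application pool identity.
$strServer = "sqlserver.example.com";
$aryConnectionInfo = array("Database" => "MyDatabase");

$conn = sqlsrv_connect($strServer, $aryConnectionInfo);
if ($conn === false) {
    die(print_r(sqlsrv_errors(), true));
}

// SYSTEM_USER shows which account actually logged in -- handy when chasing
// the "unexpected ID" errors described above.
$stmt = sqlsrv_query($conn, "SELECT SYSTEM_USER AS LoginName");
if ($stmt !== false && ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC))) {
    echo "Connected as: " . $row['LoginName'];
}
sqlsrv_close($conn);
?>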

Facebook’s Offensive Advertising Profiles

As a programmer, I assumed Facebook used some sort of statistical analysis to generate advertising categories based on user input rather than employing a marketing group. A statistical analysis of the phrases being typed is *generally* an accurate reflection of what people are talking about, although I’ve encountered situations where their code does not appropriately weight adjectives (FB thought I was a Trump supporter because incompetent, misogynist, unqualified, etc. didn’t clue them into my real beliefs). But I don’t think the listings causing an uproar this week were factually wrong.
 
Sure, the market segment name is offensive; but computers don’t natively identify human offense. I used to manage the spam filtering platform for a large company (back before hourly anti-spam definition updates were a thing). It is impossible to write every iteration of every potentially offensive string out there. We would get e-mails for \/|@GR@! As such, there isn’t a simple list of word combinations that shouldn’t appear in your marketing profiles. It would be quite limiting to avoid ‘kill’ or ‘hate’ in profiles too — a group of people who hate vegetables is a viable target market. Or those who make killer mods to their car.
 
FB’s failing, from a development standpoint, is not having a sufficiently robust set of heuristic principles against which target demos are analysed for non-publication. They may have assumed the list would be self-pruning: no company is going to buy ads to target “kill all women”, and any advertising string that receives under some threshold of buys in a delta-time gets dropped. Lazy, but I’m a lazy programmer and could *totally* see myself going down that path. And spinning it as the most efficient mechanism at that. To me, this is the difference between a computer science major and an information sciences major. Computer science is about perfecting the algorithm to build categories from user input and optimizing the results by mining purchase data to determine which categories are worth retaining. Information science teaches you to consider the business impact of customers seeing the categories which emerge from user input.
 
There are ad demos for all sorts of other offensive groups, so it isn’t like the algorithm unfairly targeted a specific group. Facebook makes money from selling advertisements to companies based on what FB users talk about. It isn’t a specific attempt to profit by advertising to hate groups; it’s an attempt to profit by dynamically creating marketing demographic categories and sorting people into their bins.
This isn’t limited to Facebook either – in any scenario where it is possible to make money but costs nothing to create entries for sale … someone will write an algorithm to create passive income. Why WOULDN’T they? You can sell shirts on Amazon. Amazon’s Marketplace Web Service allows resellers to automate product listings. Write some custom code to insert random (adjectives | nouns | verbs) into a template string, throw together a PNG of the logo superimposed on a product, and use a production facility with an ordering API so the item is only made once it has been ordered … and you’ve got passive income. And people did. I’m sure some were wary programmers – a sufficiently paranoid person might even have a human approve the new list of phrases. Someone less paranoid might make a banned word list (or go further and source words from a dictionary, checking the definitions for banned words too). But a poorly conceived implementation will just glom words together and assume something stupid/offensive just won’t sell. Works that way sometimes. Bad publicity sinks the company other times.
 
The only thing that really offends me about this story is that unpleasant people are partaking in unpleasant conversations. Which isn’t news, nor is it really FB’s fault beyond creating a platform to facilitate the discussion. Possibly some unpleasant companies are targeting their ads to these individuals … although that’s not entirely FB’s fault either. Buy an ad in Breitbart and you can target a bunch of white supremacists too. Not creating a marketing demographic for them doesn’t make the belief disappear. 

Checking Supported TLS Versions and Ciphers

There have been a number of SSL/TLS vulnerabilities (and deprecated ciphers that should be disabled, especially when transmitting particularly sensitive information). On Linux distributions, nmap includes a script that enumerates SSL/TLS versions and, per version, the supported ciphers.

[lisa@linuxbox ~]# nmap -P0 -p 25 --script +ssl-enum-ciphers myhost.domain.ccTLD

Starting Nmap 7.40 ( https://nmap.org ) at 2017-10-13 11:36 EDT
Nmap scan report for myhost.domain.ccTLD (#.#.#.#)
Host is up (0.00012s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE
25/tcp open smtp
| ssl-enum-ciphers:
|   TLSv1.0:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: server
|   TLSv1.1:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: server
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (dh 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CCM_8 (dh 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CCM (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (dh 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_CCM_8 (dh 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_CCM (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256 (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 (rsa 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
|       TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
|       TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CCM_8 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CCM (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CCM_8 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CCM (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: server
|_  least strength: A

Nmap done: 1 IP address (1 host up) scanned in 144.67 seconds
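If nmap isn’t available, openssl’s s_client can spot-check individual protocol versions against the same port. It won’t enumerate every cipher in one pass, but it confirms whether a given version is accepted (the hostname is the same placeholder as above, and -starttls smtp matters because this is port 25):

openssl s_client -connect myhost.domain.ccTLD:25 -starttls smtp -tls1
openssl s_client -connect myhost.domain.ccTLD:25 -starttls smtp -tls1_2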

Security Standards For Financial Information

A long time ago, processors of credit card information didn’t have any standards. And they’d lose your data. People didn’t like that, and some type of regulation had to be put on the industry. The credit card processors got together and wrote their own regulations – PCI. Being industry-written, the standards were a lot more concerned with their impact on profitability than government regulation would have been. The PCI standards were fairly effective.

And now one of the credit bureaus has lost a huge amount of personal data – including social security numbers and account numbers that should never have been stored as anything other than a one-way hash in the first place. But the bigger question is: how are these credit bureaus able to operate with standards that are less strict than the industry-association-generated PCI standards? My guess is that there will be a credit bureau industry association writing security standards in the next week or so. If there isn’t an industry association forming to ensure my social security number and account numbers aren’t stored in clear text on web-accessible servers at credit bureaus … I should hope the government would intervene and mandate a certain level of security.

Equifax Hack

First of all, saying half the population of the United States has had their personal information stolen might be accurate, but it’s the good marketing spin. 2016 numbers had 249,485,228 adults in the United States, so the roughly 143 million records reported stolen work out to 57% of people over 18. Now there are people with no credit history. It’s a bit of a thing when you first want to rent a flat or get a credit card … you have no credit history, and can’t get credit until you have one. Last I read, it was something like 14% of adults who have no credit record — meaning Equifax gave up information on 66% of the credit-having population.

Leaving aside the marketing spin on numbers, though, why the hell is a credit bureau storing my personal information in a retrievable format instead of a one-way hash? Performance, I assume … so I guess my question really is why were a couple of clock cycles considered more important than the security of my data? Some of the data is probably maintained in clear text because they use heuristic matching to link incoming data to entities. I’m guessing my info comes in with a name, address, creditor name, and account number. And they’ve got to be able to match up the thirty different iterations of my address to ingest the data. But there’s no reason for the account number to be stored unhashed – store the last two or three digits in a new column for display (Your XYZ account ending in ###). And there’s sure as hell no reason for the SSN to be stored unhashed – even if they’d have to store the full one hashed and the last four in another hash because some data doesn’t come in with full SSNs.
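To illustrate the idea (and only the idea – this is a sketch with made-up values, not anyone’s actual schema), a keyed one-way hash supports matching while a separate last-digits column covers display:

<?php
// Made-up illustrative values; the HMAC key would live outside the database.
$strHashKey = getenv('ACCT_HASH_KEY');
$strSSN = '123-45-6789';
$strAccountNumber = '4111111111111111';

// Deterministic keyed hashes let incoming records be matched without ever
// storing the raw values.
$strSSNHash = hash_hmac('sha256', preg_replace('/\D/', '', $strSSN), $strHashKey);
$strAcctHash = hash_hmac('sha256', $strAccountNumber, $strHashKey);

// Only the trailing digits are kept in clear text, purely for display.
$strAcctLastFour = substr($strAccountNumber, -4);
echo "Your account ending in $strAcctLastFour";
?>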

ZoneMinder After Upgrade

We recently updated from ZoneMinder 1.30 to 1.34 – easy as can be, ran the DB update script and everything came right online. Except … our home automation system hasn’t been able to access the system. OpenHAB reports that the bridge is offline. And we’re getting 404 errors in all of the /zm/api calls in access_log.

Turns out the API was offline because when the new package came down … there was a zoneminder.conf.rpmnew in the Apache conf.d directory. Can’t even say I found this intentionally – I wanted to check the Apache config file to see if it had anything about the api directory, did a directory listing, and said oooooh!

[lisa@fedora01 conf.d]# ll zone*
-rw-r--r-- 1 root root 1990 Jul 29 18:13 zoneminder.conf
-rw-r--r-- 1 root root 1990 Aug 28 22:34 zoneminder.conf.rpmnew

They’ve changed a few of the sub-directories and added components to the config. I renamed zoneminder.conf to zoneminder.conf.old, copied zoneminder.conf.rpmnew to zoneminder.conf, repeated a few config tweaks we had made for the original installation, and restarted Apache … voila, we can fetch /zm/api/host/getVersion.json and get values. So if you’re getting odd 404 errors and CakePHP “/zm/api” not found errors, maybe you forgot to update your config with changes from the rpmnew file.
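For reference, the swap itself was nothing fancy – roughly the following, assuming the stock Fedora Apache layout:

cd /etc/httpd/conf.d
diff zoneminder.conf zoneminder.conf.rpmnew    # see what actually changed
mv zoneminder.conf zoneminder.conf.old
cp zoneminder.conf.rpmnew zoneminder.conf
# re-apply any local tweaks saved in zoneminder.conf.old, then
systemctl restart httpd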

Cleaning Up Old OpenHAB Persistence Tables

So my husband asked for a program that would go out to the OpenHAB persistence database and identify all of the item tables that are no longer associated with active items. If you rename or delete an item from OpenHAB, the associated data is retained in the persistence database. Might be a good thing – maybe you wanted that data. But if it’s useless fluff … well, no need to keep the state changes from a door sensor that’s no longer around.

Wrote the code, and asked him how many days old he wanted the last update to be before the item table got dropped … and he told me this was a useless way to do it — maybe something legitimately hasn’t updated in six months or three years, so age of last update is no way to identify tables for removal. Which, yeah, then why ask for it!? Then I needed to write something that takes a list of items from OpenHAB and identifies everything in the items table that does not appear in the OpenHAB list, so those tables can be deleted. But I figured I’d post the original code too in case anyone else could use it. Both are in Perl, and neither is particularly well-written Perl. I trust the data and don’t want to protect against injection attacks.
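For what it’s worth, one way to produce the openhabItemList.txt used by the first script below is to pull item names from the OpenHAB REST API – assuming the API is reachable on the default port and jq is installed (the hostname is a placeholder):

curl -s http://openhab.example.com:8080/rest/items | jq -r '.[].name' > openhabItemList.txt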

Drop tables for items that no longer appear in OpenHAB:

use strict;
use DBI;

my %strItemsFromOpenHAB = ();
open(INPUT,"./openhabItemList.txt") or die "Cannot open item list: $!";
while(<INPUT>){
        chomp();
        my $strCurrentItem = $_;
        $strItemsFromOpenHAB{$strCurrentItem}++;
}
close INPUT;

my $dbh = DBI->connect('DBI:mysql:openhabdb;host=DBHOST', 'DBUID', 'DBPassword', { RaiseError => 1 } );

my $sth = $dbh->prepare("SELECT * FROM items");
$sth->execute();
while (my @row = $sth->fetchrow_array) {
        my $strItemID = $row[0];
        my $strItemName = $row[1];
        if(! $strItemsFromOpenHAB{$strItemName} ){              # If the current item name is not in the list of items from OpenHAB
#               print "DELETE FROM items where ItemID = $strItemID\n";
                print "DROP TABLE Item$strItemID;  # $strItemName \n";
        }
}
$sth->finish();

$dbh->disconnect();

 

Identify tables that have not been updated in iTooOldInDays days:

use strict;
use DBI;
use Date::Parse;
use Time::Local;

my $iTooOldInDays = 365;

my $iCurrentEpochTime = time();

my @strItems = ();
my $iItems = 0;

my $dbh = DBI->connect('DBI:mysql:openhabdb;host=DBHOST', 'DBUID', 'DBPassword', { RaiseError => 1 } );

my $sth = $dbh->prepare("SELECT * FROM Items");
$sth->execute();
while (my @row = $sth->fetchrow_array) {
        $strItems[$iItems++] = $row[0];
}
$sth->finish();

for(my $i = 0; $i < $iItems; $i++){
        my $strTableName = 'Item' . $strItems[$i];
        my $sth = $dbh->prepare("SELECT * FROM $strTableName ORDER BY Time DESC LIMIT 1");
        $sth->execute();
        while (my @row = $sth->fetchrow_array) {
                my $strUpdateTime = $row[0];
                my @strDateTimeBreakout = split(/ /,$strUpdateTime);
                my $strDate = $strDateTimeBreakout[0];
                my $strTime = $strDateTimeBreakout[1];

                my @strDateBreakout = split(/-/,$strDate);
                my @strTimeBreakout = split(/:/,$strTime);

                my $iUpdateEpochTime = timelocal($strTimeBreakout[2],$strTimeBreakout[1],$strTimeBreakout[0], $strDateBreakout[2],$strDateBreakout[1]-1,$strDateBreakout[0]);
                my $iTableAge = $iCurrentEpochTime - $iUpdateEpochTime;

                if($iTableAge > ($iTooOldInDays * 86400) ){
                        print "$strTableName last updated $strUpdateTime - $iUpdateEpochTime\n";
                }
        }
        $sth->finish();
}

$dbh->disconnect();

Configuring and Using RPZ

I realized today that, while I had written about why response policy zones are useful, I never indicated how to configure one! So here’s a quick document outlining how to set it up in ISC BIND. In your named.conf file, add a response policy to your options section:

        response-policy {
                zone "rpz";
        };
Then add the correspondingly named zone at the end of the file. For purposes of testing, I also added windstream.com as a forward-only zone so I could perform a network capture to see exactly what transpires when a name in the RPZ is resolved.
zone "rpz" {
      type master;
      file "rpz.db";
      allow-query { none; };
      allow-transfer { none; };
};
zone "windstream.com" {
    type forward;
    forward only;
    forwarders { 8.8.8.8; };
};
Then you just have to make an rpz.db file wherever you store your named zone files:
$TTL 60
$ORIGIN rpz.
@            IN    SOA  localhost. root.localhost.  (
                          2   ; serial
                          3H  ; refresh
                          1H  ; retry
                          1W  ; expiry
                          1H) ; minimum
                  IN    NS    localhost.

www.windstream.com    CNAME    www.yahoo.com.
I restarted named and ran "rndc flush" to avoid serving cached content instead of the RPZ host data, then ran a few tests and confirmed that the resolution configured in the rpz zone is what gets returned:
[lisa@fedora02 named]# dig +short www.windstream.com @localhost
www.yahoo.com.
atsv2-fp.wg1.b.yahoo.com.
98.139.183.24
98.138.252.30
98.139.180.149
98.138.253.109
[lisa@fedora02 named]# dig +short dell905.windstream.com @localhost
ns4.windstream.com.
173.186.244.139
[lisa@fedora02 named]# dig +short www.google.com @localhost
216.58.218.228
In this process, I learnt something interesting about ISC’s implementation of RPZ: it still performs the query and then overrides the results. Odd waste of cycles, but the query for www.windstream.com did go out to 8.8.8.8 and the response was then rewritten to Yahoo’s address from the rpz zone. Looking up a windstream.com host that isn’t in my RPZ, I got another query out to 8.8.8.8, which was expected. Querying something not in the forward zone and not in the rpz zone, I get no traffic to 8.8.8.8 (because it follows my normal forwarding, which is to our ISP’s DNS).
I was curious if this meant RPZ could not be used to locally publish a hostname that doesn’t exist in the real zone at all – but attempting to resolve such a hostname (I added abadhost.windstream.com with the same CNAME to Yahoo and reloaded my zone) worked just fine.

[root@fedora02 ~]# dig abadhost.windstream.com @localhost

; <<>> DiG 9.11.1-P2-RedHat-9.11.1-2.P2.fc26 <<>> abadhost.windstream.com @localhost
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8382
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 4, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 1aa34751c5df7f78857a921259a8706fb5e1741a46eb5352 (good)
;; QUESTION SECTION:
;abadhost.windstream.com. IN A

;; ANSWER SECTION:
abadhost.windstream.com. 5 IN CNAME www.yahoo.com.
www.yahoo.com. 1800 IN CNAME atsv2-fp.wg1.b.yahoo.com.
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.139.180.149
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.138.253.109
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.139.183.24
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.138.252.30

;; AUTHORITY SECTION:
wg1.b.yahoo.com. 172800 IN NS yf3.a1.b.yahoo.net.
wg1.b.yahoo.com. 172800 IN NS yf4.a1.b.yahoo.net.
wg1.b.yahoo.com. 172800 IN NS yf1.yahoo.com.
wg1.b.yahoo.com. 172800 IN NS yf2.yahoo.com.

;; ADDITIONAL SECTION:
yf1.yahoo.com. 86400 IN A 68.142.254.15
yf2.yahoo.com. 86400 IN A 68.180.130.15

;; Query time: 1204 msec
;; SERVER: ::1#53(::1)
;; WHEN: Thu Aug 31 16:24:15 EDT 2017
;; MSG SIZE rcvd: 315

But a query still goes out to the upstream name server, and a ‘no such name’ result comes back there before the RPZ answer is substituted. Odd.
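One related note: the rewrite target doesn’t have to be a real host. Under the standard RPZ conventions, a CNAME to the root name (".") rewrites the answer to NXDOMAIN and a CNAME to "*." rewrites it to NODATA, so the same zone file can be used to block names outright – the hostnames below are placeholders:

badhost.example.com       CNAME   .
nodatahost.example.com    CNAME   *.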

DMARC and DKIM

Microsoft’s latest security newsletter included the fact that more than 90% of Fortune 500 companies have not fully implemented DMARC. Wow — that’s something I do at home! Worse still, the Fortune 500 company for which I work is in that 90% … a fact I hope to rectify this week. SPF is just a DNS TXT record that lists the source IPs expected to be sending email from your domain, and there are lots of SPF record generators online.
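A minimal example record – the mail server address here is a placeholder from the documentation IP range:

domain.tld.    IN    TXT    "v=spf1 mx ip4:203.0.113.25 -all"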

DKIM is a little more involved, but it’s a lot easier now that DKIM packages are available in Linux distro repositories. You still *can* build it from source, but it’s easier to install the OpenDKIM package.

Once the package is installed, generate the key(s) to be used with your domain(s).

cd /etc/opendkim/keys/
openssl genrsa -out dkim.private 2048
openssl rsa -in dkim.private -out dkim.public -pubout -outform PEM
# secure private key file
chown opendkim:opendkim dkim.private
chmod go-r dkim.private
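The OpenDKIM package also ships an opendkim-genkey utility that produces the key pair plus a ready-to-paste DNS TXT snippet in one shot, if you’d rather not drive openssl directly – roughly:

opendkim-genkey -D /etc/opendkim/keys/ -d rushworth.us -s mail -b 2048
# writes mail.private (the signing key) and mail.txt (the TXT record content)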

Decide on the selector you are using — I use ‘mail’ as my selector. At work, I use ‘2017Q3’ — this allows us to change to a new key without in-transit mail being impacted. Old mail was sent with the 2017Q2 selector and *that* public key is in DNS. New mail comes across with 2017Q3 and uses the new DNS record to verify. I do *not* share these keys – anyone else sending mail from our domain needs to generate their own key (or I make one for them), use their own unique selector, and I will create the DNS records for their selector. When marketing engages a third party to send e-mails on our behalf, we have a 2017VendorName selector too.

Edit /etc/opendkim.conf. The Socket line doesn’t have to use this port – I just tend away from default ports as a habit, and whatever you pick needs to match the sendmail milter configuration below. Since it’s bound to localhost, not such a big deal.

Mode sv
Socket   inet:8895@localhost
Selector mail
KeyFile /etc/opendkim/keys/dkim.private
KeyTable /etc/opendkim/KeyTable
SigningTable refile:/etc/opendkim/SigningTable
InternalHosts refile:/etc/opendkim/TrustedHosts

There’s a config option, “SendReports” — a boolean that indicates if you want your system to send failure reports when the sender indicates they want such reports and provides a reporting address. Especially for testing purposes, I recommend indicating your domain wants reports — it is helpful in case you’ve got something configured not quite right and are failing delivery on some messages. As such, I configure my installation to send reports. It’s additional overhead in cases where verification fails; I don’t see all that many failures, and it isn’t a lot of extra load. Since I know my installation will send detailed failure information, I can use my domain when testing new implementations.
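In opendkim.conf, that’s just a couple more lines – the report address here is an assumption; use whatever mailbox you actually monitor:

SendReports     yes
ReportAddress   "dkim-reports@rushworth.us"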

Once you have the base configuration set, edit /etc/opendkim/SigningTable and add your domain(s) and the appropriate selector

*@rushworth.us mail._domainkey.rushworth.us
*@lisa.rushworth.us mail._domainkey.lisa.rushworth.us
*@scott.rushworth.us mail._domainkey.scott.rushworth.us
*@anya.rushworth.us mail._domainkey.anya.rushworth.us

Edit /etc/opendkim/KeyTable and map each selector from the SigningTable to a key file

mail._domainkey.rushworth.us rushworth.us:mail:/etc/opendkim/keys/dkim.private
mail._domainkey.lisa.rushworth.us lisa.rushworth.us:mail:/etc/opendkim/keys/lisa.dkim.private
mail._domainkey.scott.rushworth.us scott.rushworth.us:mail:/etc/opendkim/keys/scott.dkim.private
mail._domainkey.anya.rushworth.us anya.rushworth.us:mail:/etc/opendkim/keys/anya.dkim.private

Edit /etc/opendkim/TrustedHosts and add the internal IPs that relay your domain’s mail through the server (IP addresses or subnets)
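For example (the LAN subnet is a placeholder – substitute whatever actually relays your mail):

127.0.0.1
::1
192.168.1.0/24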

Create DNS TXT records – the part after p= is the content of the public key file for that selector (strip the BEGIN/END PUBLIC KEY lines and concatenate the rest into one string). When you are first setting up DKIM, use t=y (yes, we are just testing this). Once you confirm everything is functional, you can change to t=n (nope, really pay attention to our DKIM signature and policy). The policy is an individual preference. I use “dkim=all” (all mail from my domain will be signed) and “o=-” (again, all mail from my domain will be signed). You can use “o=~” (some mail from my domain is signed, some isn’t … who knows) and “dkim=unknown” (again, some is signed). Or you can use “dkim=discardable” (don’t just consider an unsigned message as more likely to be spam … you can outright drop it). As a business, I don’t use ‘discardable’ *just in case*: something crazy happens – the dkim service falls over, your key gets mangled – and receiving parties can start dropping your messages. Using “dkim=all” means they are more apt to quarantine them as spam, but someone can go and get the messages. And hopefully notice something odd is happening.

mail._domainkey.domain.tld  TXT k=rsa;t=y;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzTnpc7tHfyH1zgT3Jx/JHmGSz8WCy1jvzu5QsYvDBmimKEHRY4Kz4mya5bOYsDQuJ/sz+BJo6xDwsUXCuyEkykIlgqP+7E9oK2EcW0dZms87SGmNEnNBN5iTe0pdzk1lXx2js3QdOWswO+cmA9F1Z8OzSR+2u79huugPFBHl79zFvOEHbigrmeHEfo0KHWpeNomf/xKx+wyYr1n3R5gS+28CeC3abSyKgmaYYRLoZsjrCLbEM0m2YPJRKd1ZGOObBMa4PZWj7pT07ISEjoNnXQ27BtcL/QjKKeLkbJ0UGEOSdPEJKuEpAUvYU9lA5hbtzrqiwdlPxWYocDVPrcqAHwIDAQAB
_adsp._domainkey.domain.tld TXT dkim=all
_domainkey.rushworth.us TXT t=y;o=-;r=dkim@lisa.rushworth.us
_ssp._domainkey.rushworth.us TXT t=y;dkim=all

 

Edit /etc/mail/sendmail.mc (using the port defined in /etc/opendkim.conf):

INPUT_MAIL_FILTER(`dkim-filter', `S=inet:8895@localhost')

Build sendmail.cf from your sendmail.mc and verify that you’ve got the dkim-filter line:
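The rebuild is the usual m4 invocation (assuming a stock sendmail-cf setup), and a quick grep should turn up the line shown below:

m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
grep ^Xdkim-filter /etc/mail/sendmail.cf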

Xdkim-filter, S=inet:8895@localhost

Start opendkim, then restart sendmail. Now test it — inbound mail should have *their* DKIM signatures verified, outbound mail should be signed with the appropriate key.

Once you have verified your DKIM is functioning properly — well, first of all you can update your DNS records to remove testing mode. Then create your DMARC record:

_dmarc.rushworth.us     v=DMARC1; p=quarantine; sp=quarantine; rua=mailto:dmarc-rua@lisa.rushworth.us!10m; ruf=mailto:dmarc-ruf@lisa.rushworth.us!10m; rf=afrf; pct=100; ri=172800
Again, you don’t need to use quarantine — ‘reject’ recommends that mail be dropped and ‘none’ recommends no action (good for testing). The rua (aggregate reporting email address) and ruf (address to receive failing samples for analysis) should be in your domain.
You could add either or both of “adkim=s” and “aspf=s” to require strict DKIM or SPF alignment. I use relaxed (the default, so it does not need to be specified in the TXT record).
If you want the reports delivered to an address outside of your domain, that domain needs to publish a DNS record authorizing receipt of the reports:
     rushworth.us._report._dmarc.lisa.rushworth.us     v=DMARC1