Category: Technology

Sendmail Mailertable

Mailertable (/etc/mail/mailertable)

Routing information for external delivery. Functionally, these are like the SMTP Connectors in Exchange. Mailertable entries override everything, including smarthost definitions. This is required for internal mail routing – our sendmail servers should not deliver mail for @windstream.com to the public MX records, but rather to the internal destination we intend. We also use mailertable entries to force B2B communication over internal secured channels.

If a server is unable to deliver mail to a specific domain (e.g. one of our public IP addresses gets blacklisted), a mailertable entry can be used to direct all mail destined for the domain through one of our servers still able to make delivery.

The file contains two columns: domains and actions. Domains can be ends-with substring matches:
.anythingfromthisdomain.com

This will match @thishost.anythingfromthisdomain.com as well as @thathost.anythingfromthisdomain.com. Domains can also be a full match of the right-hand side of the email address:
justthisemaildomain.com

This will match @justthisemaildomain.com. The most specific match wins, not simply the first match in the file. So if your file contains the following:
.mysampledomain.com relay:[10.10.10.10]
thishost.mysampledomain.com relay:[20.20.20.20]

Mail destined for thishost.mysampledomain.com will be sent to 20.20.20.20
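The most-specific-match behaviour can be sketched in shell. This is a hypothetical illustration of the lookup order, not sendmail's actual code (sendmail consults the hashed mailertable.db, not the text file): try an exact key first, then progressively broader ".suffix" keys.

```shell
#!/bin/sh
# Hypothetical sketch of mailertable lookup order, NOT sendmail source.
# Usage: mt_lookup some.host.example /path/to/mailertable
mt_lookup() {
  domain="$1"; table="$2"
  # an exact match on the full domain wins over any ends-with match
  action=$(awk -v d="$domain" '$1 == d { print $2 }' "$table")
  if [ -n "$action" ]; then echo "$action"; return 0; fi
  # strip leading labels one at a time and look for a ".suffix" key
  rest="$domain"
  while [ "${rest#*.}" != "$rest" ]; do
    rest="${rest#*.}"
    action=$(awk -v d=".$rest" '$1 == d { print $2 }' "$table")
    if [ -n "$action" ]; then echo "$action"; return 0; fi
  done
  echo "no-match"
}
```

With the two-line example above, mt_lookup on thishost.mysampledomain.com prints relay:[20.20.20.20], while any other host under the domain falls through to relay:[10.10.10.10].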

Actions contain both a mailer and a host. The mailer can redirect messages to local users:
.egforlocaldelivery.com local:username

Or it can force an error response:
baddomain.com error:5.3.0:Unknown User

Our use of the mailertable, though, is to redirect mail destined for the domain:
windstream.com relay:[twnexchinbound.windstream.com]
newacquisition.com relay:[theirinternalhost.theirdomain.com]

In these cases, the square brackets around the destination suppress the MX lookup for that host. To reroute a domain’s delivery destination, then, it is imperative that the host be enclosed in square brackets.

To commit changes to the file, either run “make” from within /etc/mail to rebuild all of the database files, or run the following command to rebuild just the mailertable:
makemap hash /etc/mail/mailertable < /etc/mail/mailertable
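Before committing, a quick sanity check can catch malformed entries. The helper below is a hypothetical convenience, not part of sendmail; it simply verifies that every non-blank, non-comment line has exactly two whitespace-separated columns (domain and mailer:host action).

```shell
#!/bin/sh
# Hypothetical pre-commit check for a mailertable text file.
# Prints each malformed line and exits non-zero if any are found.
mt_check() {
  awk 'NF != 2 && NF != 0 && $1 !~ /^#/ { print "bad line " NR ": " $0; bad = 1 }
       END { exit bad }' "$1"
}
```

A guarded rebuild would then be: mt_check /etc/mail/mailertable && makemap hash /etc/mail/mailertable < /etc/mail/mailertable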

Sendmail Configuration

Sendmail Configuration Files – sendmail.cf and sendmail.mc

 

Sendmail configuration files are located by default in /etc/mail/.  PureMessage uses /opt/pmx4/sendmail/etc/mail/.

 

The main configuration file is sendmail.cf.  This is a rather cryptic file which we will not edit directly.  If you want to know the syntax for sendmail.cf, read the documentation at http://www.sendmail.org or get the O'Reilly book.  The information here is specific to the MC file, from which a macro processor builds the CF file.

 

sendmail.mc contains instructions that allow the M4 macro processor to build sendmail.cf.  Important: before you can use the macro to create a sendmail.cf file, you need to have the macro package installed.  This is the sendmail-cf package.  To ascertain whether the package has been installed on Red Hat:

 

[root@LJLLX001 mail]# rpm -qa | grep sendmail

sendmail-8.13.1-2

sendmail-cf-8.13.1-2

 

Both sendmail and sendmail-cf packages should appear in the results.  If you do not have the CF package, install it.

 

The text "dnl" within sendmail.mc is m4's "delete to newline" directive – everything from the dnl through the end of the line is discarded, making it the effective comment marker, like an apostrophe in Visual Basic or a hash in Perl.  Many lines end with dnl, or dnl followed by some commentary.  Lines beginning with dnl are not processed.

 

Common instructions within a sendmail.mc file:

 

include(`/usr/share/sendmail-cf/m4/cf.m4')dnl

This line points the m4 utility at the correct "translation" rules for building the sendmail.cf file.  It is important that the line appear at the top of the mc file, but it has nothing to do with sendmail configuration specifically.

 

VERSIONID(`setup for Red Hat Linux')dnl

This line is not required, and we frequently have 'junk' in it.  It embeds an identifying string in the cf file for administrative reference.

 

OSTYPE(`linux')dnl

More instructions for m4 – different OSes keep sendmail files in different locations, and the OS defined here identifies which parameters to use.  This line also needs to be near the top of the mc file.

 

define(`confDEF_USER_ID', `8:12')dnl

Defines which user and group sendmail will run as – do NOT pick root here.  User id 8 (mail) and group id 12 (mail) come from /etc/passwd and /etc/group respectively.

 

define(`confTO_CONNECT', `1m')dnl

Time limit for SMTP connection timeout, set to one minute normally.  This is how long your server will wait for an initial connect() to complete.

 

define(`confTRY_NULL_MX_LIST', true)dnl

Email is normally routed by MX records.  This instruction means the 'domain' can also be a host name with no MX record defined.  E.g. sending email to @windstream.com returns the MX records, as they exist.  Attempting to email @neohtwnlx810.windstream.com returns no MX records, but LX810 will be contacted directly to attempt delivery.  This is a most useful instruction for return delivery to system mailers.

 

define(`confDONT_PROBE_INTERFACES', true)dnl

The sendmail class w lists the hosts and IP addresses for which sendmail accepts mail and performs local delivery.  This class can be populated automatically; this directive disables that automatic population.  We configure this information manually in other files.

 

You can use the sendmail command line to see what various system variables are set to:

[root@LJLLX001 ~]# sendmail -d0.1 -bv

Version 8.13.1

Compiled with: DNSMAP HESIOD HES_GETMAILHOST LDAPMAP LOG MAP_REGEX

MATCHGECOS MILTER MIME7TO8 MIME8TO7 NAMED_BIND NETINET NETINET6

NETUNIX NEWDB NIS PIPELINING SASLv2 SCANF STARTTLS TCPWRAPPERS

USERDB USE_LDAP_INIT

============ SYSTEM IDENTITY (after readcf) ============

(short domain name) $w     = LJLLX001

(canonical domain name) $j = LJLLX001.vibiant.dnsalias.com

(subdomain name) $m        = vibiant.dnsalias.com

(node name) $k             = LJLLX001.vibiant.com

========================================================

 

define(`PROCMAIL_MAILER_PATH', `/usr/bin/procmail')dnl

Exactly what it says – the location of procmail.

 

define(`ALIAS_FILE', `/etc/aliases')dnl

Location of the file for local delivery aliases – not something we use often, as there are few local delivery accounts.  In the ISP, this file can be used to give someone additional addresses which deliver to the same mailbox.  The file can also be used to direct delivery for a local account to a program – in PureMessage, for example, /opt/pmx4/sendmail/etc/mail/aliases directs the pmx-auto-approve address to the application which releases user messages.
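For illustration, a few hypothetical /etc/aliases entries (the names and program path are invented, but the syntax is standard – remember to run newaliases after editing so the aliases database is rebuilt):

```
# Two extra addresses delivering to one local mailbox
sales:            jsmith
help:             jsmith
# Deliver a local address to a program instead of a mailbox
pmx-auto-approve: "|/opt/pmx4/bin/some-release-program"
```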

 

define(`confBIND_OPTS', `WorkAroundBrokenAAAA')dnl

This is a resolver option; it instructs sendmail to ignore SERVFAIL errors during IPv6 (AAAA) lookups.  There were a few domains to which we could not deliver mail without this directive.

 

define(`SMART_HOST', `[192.168.1.53]')

A smart host can be used instead of direct mail delivery.  For a server which is not meant to deliver mail to the internet (neohtwnlx824, for instance), the SMART_HOST directive sends all mail to the defined destination.  The destination can be a hostname or an IP address.  Note that mailertable entries override the smart host.

 

define(`STATUS_FILE', `/var/log/mail/statistics')dnl

Retains statistical information for the server – use the mailstats command to output the statistics; the file created here is not plain text.

 

define(`UUCP_MAILER_MAX', `2000000')dnl

Maximum size for messages relayed by UUCP mailers

 

define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl

Disables unwanted commands, usually for security reasons.  EXPN expands aliases and lists into their component members, for instance, so noexpn disables that command; likewise novrfy disables VRFY.  Some of these matter more when local delivery is handled by the sendmail server.

 

define(`confAUTH_OPTIONS', `A')dnl

Controls what kinds of authentication the server supports.  Useful if you require authentication to relay mail; we do not.  Some UNIX hosts get confused if AUTH is offered as an option, in which case you need to comment this line out of the mc file.

 

define(`confTO_QUEUEWARN', `6d')dnl

If you ever see an email from a destination mail server saying it is still trying to deliver your message and just wanted to let you know – that is what this interval defines.  To truly adhere to RFC specifications, a sendmail server should continue to attempt delivery for at least four to five days.  As a “nice” feature, the server can send periodic notifications to the sender that delivery has been delayed.  This standard comes from a time when circuits were smaller and quite lossy.  It could reasonably take days to establish a connection to the destination and transmit a message.

We are rogue and just return mail as undeliverable after a shorter period.  No reason to notify users, but to ensure that a notification is not sent, we put the warning interval at something higher than the expiration interval.

 

define(`confTO_QUEUERETURN', `12h')dnl

Related to the QUEUEWARN interval – this is the period after which the sendmail server considers the message undeliverable and returns it to the sender.  By default, this is five days so we make sure to define something more reasonable.  Otherwise there would be no way to identify “high” mail queue counts for alerting.
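Taken together, the two directives look like this in our mc files (values from the examples above) – the warn interval is deliberately longer than the return interval, so the "still trying" notice never fires:

```
define(`confTO_QUEUEWARN', `6d')dnl   warning would fire after 6 days...
define(`confTO_QUEUERETURN', `12h')dnl ...but mail bounces after 12 hours
```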

 

define(`confQUEUE_LA', `16')dnl

Load average at which queue-only mode is engaged – messages are accepted and queued but not delivered immediately.

 

define(`confREFUSE_LA', `48')dnl

Load average at which SMTP connections are refused

 

define(`confDELAY_LA', `30')dnl

Load average at which sendmail will delay one second on SMTP commands

 

define(`confMIN_QUEUE_AGE', `5m')dnl

Minimum time a message has to sit in the queue before it is retried

 

define(`confTO_HOSTSTATUS', `2m')dnl

If a host has been denoted as unavailable, the status will be cached for this duration.  After the interval expires, connection to the host will be retried

 

define(`confMAX_DAEMON_CHILDREN', 2000)

Maximum number of child processes permitted.  Sendmail rejects subsequent connections once this number has been reached.  It is very important to have something defined on the DMZ servers – the default is infinite, and a server can exhaust memory and become unresponsive, requiring a reboot, when too many processes are spawned.

 

define(`confTO_IDENT', `0')dnl

Timeout for responses to IDENT queries; setting it to zero disables IDENT lookups entirely.

 

FEATURE(`no_default_msa', `dnl')dnl

The default MSA options are not used; instead they are explicitly defined in the DAEMON_OPTIONS directive.

 

FEATURE(`smrsh', `/usr/sbin/smrsh')dnl

The restricted shell sendmail uses to run programs (e.g. from aliases) – not really pertinent in our case.

 

FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl

This directive specifies the use of a mailertable and the location of the file; the mailertable itself is discussed in more detail in the Mailertable section above.

 

VIRTUSER_DOMAIN_FILE(/etc/mail/virtuser-domains)dnl

This directive specifies the location of the file containing virtualised domains; the file will be discussed in more detail later.

 

FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl

This directive specifies the use of virtual user mapping and the location of the file containing those mappings; the file will be discussed in more detail later.

 

FEATURE(always_add_domain)dnl

Appends the local host domain to even locally delivered mail.

 

FEATURE(use_cw_file)dnl

Alternate host names are in /etc/mail/local-host-names – machine aliases

 

FEATURE(use_ct_file)dnl

Users who can set alternate envelope-from addresses without generating a warning message.  The file is /etc/mail/trusted-users.

 

FEATURE(local_procmail, `', `procmail -t -Y -a $h -d $u')dnl

Specifies the program to use as the local mailer, and its command options.

 

FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl

This directive specifies the use of an access restriction table and the location of the file; the file will be discussed in more detail later.

 

EXPOSED_USER(`root')dnl

Exempts the listed user from masquerading – mail from root keeps the real host name, so you can tell which machine sent it.

 

DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl

This is where the settings for the listening daemon are defined.  Port=smtp uses the default port of 25; an alternate port can be specified instead.  Addr=# can be included to bind sendmail to a specific address (including 127.0.0.1 for localhost-only access).
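For example (illustrative values only), binding the listener to loopback on a non-standard port would look like:

```
DAEMON_OPTIONS(`Port=2525, Addr=127.0.0.1, Name=MTA')dnl
```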

 

INPUT_MAIL_FILTER(`vamilter', `S=inet:3333@localhost,F=R,T=S:10m;R:10m;E:10m')

Defines a "milter" – mail filter.  The socket on which the milter listens must be given with S=: S=inet is an IPv4 socket, S=inet6 is an IPv6 socket, and S=local is a Unix-domain socket (e.g. under /var/run/).

F= defines the action to take if the milter fails: R (reject) or T (tempfail); if no option is included, the message just passes through sendmail and the milter is ignored.

T= defines timeouts for sendmail's communication with the milter:

C    Connect timeout

S    Sending timeout (sendmail transmission of data to milter)

R    Reading timeout (for reply from milter)

E    Overall timeout (between sending end of message and final ack)

 

MASQUERADE_AS(`vibiant.dnsalias.com')dnl

FEATURE(`masquerade_envelope')dnl

FEATURE(`allmasquerade')dnl

MASQUERADE_DOMAIN(`arlitljl.com')dnl

MASQUERADE_DOMAIN(`homedomain.local')dnl

This group of directives is interrelated.  Masquerading is basically replacement – MASQUERADE_AS names the domain which will be used in place of the domains identified in MASQUERADE_DOMAIN lines.  In this case, both @arlitljl.com and @homedomain.local will be rewritten as @vibiant.dnsalias.com.  The directive FEATURE(masquerade_entire_domain) could be included to also replace any subdomain of the masqueraded domains (e.g. @secured.arlitljl.com, @public.arlitljl.com, and @restricted.arlitljl.com in addition to @arlitljl.com).

masquerade_envelope applies the masquerade to the envelope information, and allmasquerade applies it to everything in the message, including cc:, from:, and to: – this directive is important when we mask an acquired company's email domain with our own.

 

FEATURE(`accept_unresolvable_domains')dnl

Accepts mail even when the domain in the MAIL FROM command does not resolve in DNS.  Since some people do not manage to configure their mail servers properly, we are less restrictive here to avoid complaints.

 

LOCAL_DOMAIN(`localhost.localdomain')dnl

Domain(s) for which the server will accept local delivery – since our servers do not really deliver mail locally, the domain should be limited to localdomain to prevent accidental misdirection of mail.

 

MAILER(smtp)dnl

MAILER(procmail)dnl

Defines mailers to be used in addition to local – these should be the last lines of the mc file

 

 

 

When you make changes to the sendmail.mc file, you need to run the macro processor to update the CF file.  You can preview the results by running:

m4 sendmail.mc | less

 

The text which will be used in sendmail.cf will be displayed on the screen.  To actually commit the changes, use:

m4 sendmail.mc > sendmail.cf

or just type

make

 

make will rebuild all of the files in /etc/mail, so ensure you like all of the changes you have made, not just the changes to sendmail.mc.

Sendmail

 

Sendmail is an open source SMTP mail transfer agent implemented on many different Unix platforms.  The original version of sendmail was written in the early 1980s by Eric Allman at Berkeley.  The current release code base of sendmail is version 8.  The packages and source can be found at http://www.sendmail.org.

 

Sendmail in its current iteration is configured by many individual files.  All of the configuration options available within the product are well documented at http://www.sendmail.org/doc and http://www.sendmail.org/m4/readme.html.

Future Releases:

There is no code base 9; rather, SendmailX, which has now become MeTA1 (http://www.meta1.org/).  MeTA1 does not include a local delivery agent or mail submission program – it is intended as a conduit for email only.  It will use a single configuration file with a radically different syntax.  Currently (the summer of 2007) the code is in a pre-alpha release, so it will be a while.

 

Practical Information:

We back the sendmail configuration files up nightly to NEOHTWNLX810 (/home/NDSSupport/Backups/).  You can restore the files from /etc/mail (or /opt/pmx4/sendmail/etc/mail as appropriate) to a rebuilt server and return the server’s complete configuration.

 

Mail Queues:

Sendmail stores unsent relayed messages in /var/spool/mqueue.  Unsent locally submitted messages will first be in /var/spool/clientmqueue.  Within the mqueue folder, each message has two separate files: one for the queue control and header information and a second for the message data.  To count the number of messages queued for delivery, then, divide the number of files within /var/spool/mqueue in half:

echo `ls /var/spool/mqueue | wc -l` / 2 | bc
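The same arithmetic as a small helper, demonstrated here against a scratch directory – point it at /var/spool/mqueue (or the PureMessage queue directory) in practice:

```shell
#!/bin/sh
# Count queued messages: each message contributes two files (a control/header
# file and a data file), so halve the file count. Plain `ls` is used so that
# the "total" line and dot entries from `ls -al` do not inflate the count.
queued_count() {
  echo $(( $(ls "$1" | wc -l) / 2 ))
}
```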

 

New Email Domain Configuration:

We have all of the resources required to establish a new email domain.  (Registration may well be required for a new DNS zone).  To establish a new publicly functional email domain from an existing DNS zone:

  • Create MX records within the DNS zone.  The 10 weight record should point to neohtwnlx821.windstream.com. and the 20 weight record should point to neohtwnlx823.windstream.com.  It is important in these MX records to include the period trailing the hostname
  • On NEOHTWNLX821 and NEOHTWNLX823, edit /etc/mail/access to include the new domain with RELAY
  • On NEOHTWNLX821 and NEOHTWNLX823, edit /etc/mail/mailertable to direct mail to the appropriate destination (unix host, Exchange server, etc)
    • The destination must be configured to accept email from LX821/LX823
  • If internal mail routing needs to be established, an SMTP connector needs to be added to the Exchange organization.  Additionally, mailertable entries should be created on at minimum LX825, LX830, LX833, and LX828
  • If mail should be delivered to mailboxes in the Exchange organization, the new domain should be added to the “Additional Mail Domains” recipient policy.  In this case, the SMTP connector would not be created with Exchange.

 

Sendmail Troubleshooting:

To display information about queued messages:

sendmail -bp

Or to obtain analysis of the domains and addresses within the mail queue, use the perl scripts located in /root/bin:

frombydomain.pl    Ascending count of sender domains

frombyemail.pl     Ascending count of sender email addresses

tobydomain.pl      Ascending count of recipient domains

tobyemail.pl       Ascending count of recipient email addresses

 

To retry the queued messages with output to the terminal:

sendmail -v -q -C/etc/mail/sendmail.cf &

 

To retry a specific recipient domain’s queue:

sendmail -v -qRthedomain.com -C/etc/mail/sendmail.cf &

Or a specific sender domain’s queue:

sendmail -v -qSthedomain.com -C/etc/mail/sendmail.cf &

 

To retry a specific message ID:

sendmail -v -qImsgidgoeshere -C/etc/mail/sendmail.cf &

 

Add "-d8.11" to the queue retry commands to output debug-level diagnostic information to the terminal, e.g.

sendmail -v -qIl6UJtCE3021014 -C/etc/mail/sendmail.cf -d8.11

 

Linux Authentication Over Key Exchange

On Linux, you can log in without logging in (essential for non-interactive processes that run commands on remote hosts, but also nice for accessing hosts when you get paged at 2AM to look into an issue). The first thing you need is a key. You can use the OpenSSH installation on a server to generate the key:

ssh-keygen -t rsa -b 2048

Or you can run puttygen.exe (www.chiark.greenend.org.uk/~sgtatham/putty/download.html or our Samba share on neohtwnlx810, or cwwapp556, or twwapp081) for a GUI key generator.

Click the "Generate" button and then move the mouse around over the blank area of the PuTTYgen window – your coordinates are used as random data for the key seed.

Once the key is generated, click "Save public key" and store it somewhere safe. Click "Save private key" and store it somewhere safe. ** Key recovery isn't a big deal – you can always generate a new public/private key pair and set it up. Time consuming if your key is out in a lot of places, but it isn't a data-loss kind of thing. *** Anyone who gets your private key can log in as you anywhere you have set up this key exchange. You can add a passphrase to your key for additional security.

Once you've saved your keys, copy the public key text at the top of the window. You don't have to – you can instead drop the newline characters from the saved public key file – but copying here saves time.

Go to whatever box you want to log into using the key exchange. ** I have a key exchange set up from my Windows boxes (laptop, cwwapp556, and twwapp081) to myid@neohtwnlx810. I then have a different key used from myid@neohtwnlx810 to all of our other boxes. This allows me to change my on-laptop key (i.e. the one more likely to get lost) more frequently without having to get a new public key onto dozens of hosts.

Once you are on the box you want, as the ID you want (you can do a key exchange to any id for which you know the password – so you can log into ldap@vml106 or sendmail@vml905 and do this), run "cd $HOME/.ssh". If it says there is no such file, run "ssh localhost" – it will ask whether you want to store the server public key; say yes, as that creates the .ssh folder with the proper permissions. Ctrl-c and cd .ssh again. Now check whether there is an authorized_keys, authorized_keys2, or both. vi whichever ones you find – if there aren't any, try "vi authorized_keys" first. Go into edit mode, paste in the public key line we copied earlier, and save the file. If you get an error like "Server refused our key", you can "mv authorized_keys authorized_keys2" (what I need to do on RHEL <=6.7, although we probably should look into that!).
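The manual steps above can be condensed into a couple of shell helpers – a sketch, with the key path and user@remotehost as placeholders (this is essentially what the stock ssh-copy-id utility does for you):

```shell
#!/bin/sh
# Sketch of the key-exchange setup described above; "user@remotehost" is a
# placeholder. Generating without a passphrase here (-N '') -- supply one
# for additional security.
make_key() {   # make_key /path/to/key
  ssh-keygen -t rsa -b 2048 -f "$1" -N '' -q
}
install_key() {   # install_key /path/to/key user@remotehost
  ssh "$2" 'mkdir -p ~/.ssh && chmod 700 ~/.ssh &&
            cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys' \
    < "$1.pub"
}
```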

In PuTTY, load the configuration for whatever host we just pasted the public key into. Under Connection -> Data, find the "Auto-login username" field. Put in whatever ID you used when you added the public key (in my case e0082643 … but if you were using ldap@vml106, you would put ldap in here).

Then under Connection -> SSH -> Auth, find the "Private key file for authentication" field and put in your private key location. Go back to the Session section and save the configuration changes.

Now connect, and you shouldn't need to supply a password (or you only need to supply your key passphrase).

** OpenSSH automatically uses the id_dsa or id_rsa private key from $HOME/.ssh/ when you attempt to authenticate to other hosts. If the destination id@host has your public key in its $HOME/.ssh/authorized_keys (or $HOME/.ssh/authorized_keys2 if you happen to be using a deprecated version), then you'll get the magic key-based authentication too. Caveat: on the source Linux host, your private key cannot be group or other readable. Run "chmod go-rw $HOME/.ssh/id_rsa" to ensure it is sufficiently private; otherwise authentication will fail because of the permissive access.

** Once you have a key exchange in place, it is fairly easy to update your key. Create a new one but do not yet replace your old one. You can make a shell script that updates all remote hosts with your new public key – per host, run:

ssh user@remoteHost “echo \”`cat $HOME/.ssh/new_id_rsa.pub`\” >> $HOME/.ssh/authorized_keys”

Once the new public key info has been pushed out, test it using "ssh -i new_id_rsa user@remoteHost" and verify that the key authentication works. Once confirmed, rename your old id_rsa and id_rsa.pub files to something else, then rename new_id_rsa to id_rsa and new_id_rsa.pub to id_rsa.pub.
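Those rotation steps can be sketched as a script – HOSTS and the file names are placeholders; the new key is verified on every host before the old one is retired:

```shell
#!/bin/sh
# Sketch of the key rotation described above. HOSTS is a space-separated
# placeholder list of user@host destinations that already trust the old key.
rotate_key() {
  old="$HOME/.ssh/id_rsa"; new="$HOME/.ssh/new_id_rsa"
  # push the new public key everywhere the old key already works
  for h in $HOSTS; do
    ssh "$h" "echo \"$(cat "$new.pub")\" >> ~/.ssh/authorized_keys" || return 1
  done
  # verify the new key authenticates before retiring the old one
  for h in $HOSTS; do
    ssh -i "$new" "$h" true || return 1
  done
  mv "$old" "$old.old" && mv "$old.pub" "$old.pub.old"
  mv "$new" "$old" && mv "$new.pub" "$old.pub"
}
```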

Role Based Provisioning

A decade ago, at University, I did a good bit of data mining research toward building a role-based provisioning analysis engine. I had a friend in high school who had just completed her PhD project on using technology to enhance K-12 education. She performed paid consulting services to implement a technology approach package in school systems – except for the one which employed her, even though she offered her package and guidance for free as part of her employment. I remember thinking that seemed a bit insulting. Here all sorts of people are handing over taxpayer/tuition money for your expertise, but the people who employ you won't even take it for free.

Well, my company was never much for role-based provisioning – even the algorithms I'd built as part of my own research projects. Ohhh, they were all for it in theory. Get the data mining in place, figure out what everyone has, build the templates. Now who signs off on all customer service reps getting access to the billing system and the entire corporate finance group getting access to the financial record system? Anyone? Hello?

Because, in the real world, some finance flunky is going to embezzle some money. And some customer service kid is going to credit back his friends' accounts. At which point the question becomes: who said so-and-so could access such-and-such? And no one wants their name associated with that decision. Which makes sense – the individual manager hired the person, trusted the person, and is responsible for ensuring that trust was warranted.

I am proposing a new approach to role-based provisioning. We retain the data mining component. We have access templates built on that data. But we use the template to form a provisioning request. On hire or job transfer, the manager receives a notice to go review the access request form. They can add or remove items at will. They can click to compare access with another specific individual on their team. But before any of this access is granted, they click the "I say this person can have this access" button. Voila – no single person is responsible for all electronic malfeasance within the company.

Women In STEM

Some Google engineer failed to heed the parable of Harvard President Larry Summers – suggest in any way that women and men are different, and there will be an uproar. What's ironic is that the main gist of the guy's monologue (available online) is that refusing to discuss differences between men and women, because doing so is insensitive, yields diversity programs that are ill suited to their goal. And that companies make business decisions on how close to a 50/50 split they want to get. (If having parity in gender representation were the highest priority in hiring decisions, then a company would only interview female candidates until parity was reached.) And the general reaction online has essentially proved the guy's point. A reasonable argument would have been challenging the research he cited – a fairly easy task. Baron-Cohen, for instance, couldn't even reproduce his own results. In other cases, the Google engineer conflates correlation and causation. Men don't take paternity leave because of retribution – my husband was terminated after taking his two weeks of vacation after our daughter's birth. That's not even asking for paternity leave – that's attempting to use vacation time as paternity leave. I experienced more stress as a woman entering an IT support department not because I have a female brain but because my capabilities were questioned (you're going to fix my computer!?) and some coworkers felt entitled to make sexual advances towards me (I doubt any new male employee was asked to provide his measurements and describe his genitalia to provide a picture to accompany his coworker's pleasuring himself to the individual's voice on conference calls).

The mistake people make, both in the case of Summers and this engineer, is taking population-wide averages as attributes of an individual and conflating 'different' with 'inferior'. The engineer wasn't wrong in one way – it is difficult to discuss gender norms and studies. Trying to divorce emotion from discussion of gender-specific behaviours and preferences isn't a battle worth fighting. There have been too many badly formed studies designed to prove the superiority of some majority group for any new study to be approached seriously. But he could have made the same suggestions without the contentious topic of gender norms and diversity programs.

Gender aside, different people think differently and have different preferences. I don’t believe this is a contentious declaration. I have artistic friends, I have detail oriented friends, I have creative friends who are not artistic. I know people who love cats and people who love jumping out of perfectly functional aircraft. Introverts and extroverts.

Historically, computer software was not designed for the general public. Programmers hired back in the 60's and 70's were not brought in as user experience designers. Text interfaces with obscure abbreviations and command line switches were perfectly acceptable code. They progressed in the field, moved up, and then hired more people like themselves. As computers were adopted, both in business and personally, computer software was slow to adopt 'usability' as a goal. Consider the old blue screen word processor. When I left University in 1996, I went to a temp agency in the hope of getting a paycheque that week. They had a computer competency test – I figured I would ace it, since I'd been running student IT support at the Uni for about eighteen months. I installed Windows 95, IRIX, and AIX and was fairly proficient using any of them. I served as a TA for intro word processing and Excel classes – and knew Office 95 better than most of the instructors by year end. Then the temp agency sat me down in front of a computer with an ugly blue screen. What the hell?? I later discovered this old word processing package was common throughout businesses (Universities get grants and buy the latest cool 'stuff'. Businesses reluctantly forked over a couple hundred grand ten years ago and are going to use that stuff until it decomposes into its component molecules.). People started out with a strip of paper over their function keys so they had a clue how to do anything beyond type on the ugly blue screen. Of course the temp agency was looking for competent computer users, so it didn't have the quick-ref strip. I couldn't even start the test (open the file whatever.xtn).

Look at sendmail's cf configuration file, or search for vim quick-ref guides. Even git – sure, there are GUI integrations, but the base of git is cryptic command line stuff that you commit to memory. This is not software developed by people who are people focused. Initially, with the personal computer in the 80's, usability was not a concern – "computer users" were in some way skilled and learned to work around the software. With public adoption of the Internet in the 90's, accelerating dramatically in the 2000's and 2010's, people began to use software. En masse. And new users demanded 'easy' to use, intuitive software. User experience engineering became a thing. Software was released to 'regular' users to obtain usability feedback.

But the developers behind the software are still, predominantly, the same personality types who developed code for ENIAC. This dichotomy creates an opportunity for the company’s recruitment and hiring teams to give our software an edge. As a company writing software that will be used by people, we think developers who lean toward people on the Things — People dimension, or who score as Social or Artistic on Holland’s personality types, etc provide value to the company. Since we have a lot of things / realistic or investigative types here already, we want our recruiting and hiring practices to create a balance with the other personality types. And we should look at ways to change our processes and make engineering work better align with the interests of people who are more people / cooperative and social or artistic.

Even if the argument was considered flawed, I don’t believe it would receive the widespread distribution and uproar the “it’s all about gender” version encountered. Someone could say “we’d rather make our current staff better at UX” or “we don’t think we need to change our practices to appeal to these other personality types”. Whatever. Even if he still offended his coworkers (I can too do artistic stuff!) or still managed to come off as entitled and whiny, I doubt the guy would have been fired.

Visual Studio Code

We found a free, open source code editor from Microsoft called Visual Studio Code — there are downloadable modules that add formatting for a variety of programming languages (C#, C++, Fortran), scripts (Perl, PHP), and other useful formats like MySQL and Apache httpd config files. It also serves as a GUI front end to git. And that is something I’ve been trying to find since I inherited a git server at work — a way for people to avoid having to remember a dozen different git commands.

Business Practices To Avoid

Don’t ignore your customers. Seems obvious, but failing to engage customers undermines large corporations. I worked for one of Novell’s last big customers back in 2000-2010. We had the misfortune of being in the same territory as their biggest customer, FedEx, so got little sales attention. We were having problems managing computers without using the Active Directory domain — the ZENworks Dynamic Local User component that hooked the Novell GINA and created/maintained local user accounts had been in use before an NT4 domain even existed within the company. In perusing their web site, I identified a product that perfectly met our needs *and* managed mobile devices (which was an up and coming ‘thing’ at the time). Why, I asked the sales guy, would you not pitch this product to us when we tell you about the challenges we are trying to address? No good answer, but it really was a rhetorical question. There wasn’t a downloadable demo available, you had to engage your sales rep to get a working demo copy — I asked for one, and he said he’d get one to me when he got back to his office.

Nothing. Emailed him a week later in case he just forgot. Oh, yeah, I’ll get that right out to you. A few weeks later, emailed him again. A few weeks later — well, let’s be serious here. We started using Exchange in 2000, and had an Active Directory domain licensed for all users anyway. We were willing to consider paying real money for the Novell product because the migration path was easier … but from a software licensing perspective, switching workstation authentication to AD was a $0 thing. Needed a few new servers to handle authentication traffic – I think I went with five at about three thousand dollars each. Deployment, now that’s a nightmare. I wrote custom code to re-ACL the user profile directory and modify the registry to link the new user.domain SID to the re-ACL’d old profile directory. It got pushed out via automated software deployment, and the users whose deployments failed would call in each morning. Even a 1% failure rate when you’re doing 10,000 computers a week is a lot of phone calls and workstation re-images. (At a subsequent employer, we made the same change but placed workstations into the domain as they were re-imaged for other reasons. New computer, you’re in the domain. Big problems with your OS, you’re in the domain. Eventually we had a couple hundred computers not yet in the domain, and those individual users were contacted to schedule a reimage. Much cleaner process.)

The company didn’t last much longer in that form — they purchased SuSE shortly thereafter. The sales guys came back – we used RHEL but would have happily bundled our Linux purchases into the big million dollar contract. How much are you looking to charge for updates? Dunno. How much is support? Dunno. Do you know anything about the company’s sales plan for SuSE? Not a thing. Well … glad you could stop by? I guess.

As far as software companies go, this is ancient history. But it’s something I think of a lot when dealing with Microsoft these days. There’s a free mechanism that allows you to use your existing Active Directory to store local workstation admin account passwords. Local workstations manage their own passwords — no two passwords are the same; you can read the individual computer’s password out of AD and provide it to the end user. Expire the computer’s local admin password and, next time it communicates with the domain, the password will be changed. Never heard of it from the MS sales guy – someone found LAPS through random web searching. Advanced Group Policy Management that provides auditing and versioning for group policies – not something our MS reps mentioned. Visual Studio Code – yet another find based on random web searching. I know it isn’t the sales guy’s job to tell me about every little bit of free add-on code they have created, but isn’t it in their best interest to ensure that the products we have purchased become an intrinsic part of our business processes? I tell our SharePoint group that all the time — there are a lot of web based content management platforms. If all you use it for is avoiding web coding … well, I’ve got WordPress that does that. Or some Atlassian wiki thing. And some Jive wiki thing. And some Xerox document repository that has web pages. You need to make something unique to your product intrinsically entwined with business operations so no one would ever think of replacing your product.

Setting Up A New Email Domain – With SenderID and DK/DKIM TXT Records

If you are going to begin using e-mail on a sub-domain of an existing zone, you do not need to do anything special to register the sub-domain. If this is a new domain, it needs to be publicly registered first. The examples used herein will be a mail domain subordinate to windstream.com. If you are performing the tasks for a new zone, create the new zone first.

To allow e-mail exchange with a domain, create MX record(s). For a third party vendor, they need to tell you what their mail exchangers are. For internally hosted services, use the same assignments and weights from Windstream.com. As of 19 July 2017, those are:

windstream.com  MX preference = 10, mail exchanger = dell903.windstream.com

windstream.com  MX preference = 20, mail exchanger = vml905.windstream.com

windstream.com  MX preference = 110, mail exchanger = neohtwnlx821.windstream.com

Within Infoblox, you need to be using the external DNS view. You can create matching records internally – we tend not to create internal MX records, since their absence prevents internal mass-mailer infections from routing messages. In the proper zone, click Add => Record => MX Record

The mail destination will be the subzone (here we are exchanging e-mail with @ljrtest.windstream.com)

Save this change and create the other MX records. ** You need to clue the servers into the fact this domain is now valid. ** On each server, edit /etc/mail/access and add

ljrtest.windstream.com  RELAY

If you want to use the virtusertable to map addresses within the domain, you also need to add the domain name to /etc/mail/virtuser-domain

Finally, you need to send the mail somewhere. Edit /etc/mail/mailertable and set a relay destination of somewhere that knows about the domain and is processing mail for it (is that our Exchange server? Someone else’s Unix server? An acquired company’s mail server? … depends on what you are trying to do!)

rushworth.us    relay:[10.5.5.85]

Save, make, and restart sendmail … now you have a fully functional external email domain.
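Concretely, the server-side sequence sketches out as below. This is illustrative, not a copy/paste script: the domain and relay IP come from the examples in this post, virtuser-domain is the file name used on our servers, and the restart command varies by init system.

```shell
cd /etc/mail

# 1. Allow the servers to relay mail for the new domain
echo "ljrtest.windstream.com	RELAY" >> access

# 2. Optional: enable virtusertable address mapping for the domain
echo "ljrtest.windstream.com" >> virtuser-domain

# 3. Route the domain to whatever host actually holds the mailboxes
echo "ljrtest.windstream.com	relay:[10.5.5.85]" >> mailertable

# 4. Rebuild the database maps and restart sendmail
make
systemctl restart sendmail    # or "service sendmail restart" on older systems
```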

Now secure it – that means adding sender policy framework (SPF), domain key (DK), and domain key identified mail (DKIM) records.

SPF and SenderID Records

There are both sender policy framework (v1) and SenderID (v2) records – you can create both. Not too many people use SenderID anymore, but I invariably end up finding the one guy who is evaluating mail validity purely on SenderID when I create just the SPFv1 record.

In InfoBlox, select Add => Record => TXT record. The mail destination from the MX record needs to be put in the “Name” field. Then the text value – what is that?

Quick answer is it depends. A SPF record lists all mail servers that should be sending e-mail for a domain. Is that just our MX servers? The MX servers plus the netblocks for the internal relays? Some third-party vendor?

Our MX servers and a few netblocks would be:

SPF V1: “v=spf1 mx ip4:166.150.191.128/26 ip4:98.17.202.0/23 ip4:173.186.244.0/23 ip4:65.114.230.67/32 ip4:64.196.161.5/32 ?all”

SPF V2: “spf2.0/pra mx ip4:166.150.191.128/26 ip4:98.17.202.0/23 ip4:173.186.244.0/23 ip4:65.114.230.67/32 ip4:64.196.161.5/32 ?all”

If there is a third-party vendor, they may provide an include statement for our SPF record – this is a way of referencing an external company’s SPF record within your own. You’ll see “include:mktomail.com” in our SPF records where Marketo sends mail on our behalf.

The final bit – we use ?all which means these may not be all of the servers sending mail on our behalf – we are not making an assertion beyond saying the listed sources are good. You may see vendors requesting “~all” which is a soft fail — still allows mail to pass if the sender does not match the list. The strictest is “-all” which fails mail coming from any source not in the list.

Does it matter? Depends – if a recipient has configured their mail servers to reject mail based on SPF and you use -all … mail from servers not on the list will be rejected. Not a lot of companies are thusly configured, though … so there’s not a whole lot of effective difference.

The final step is to test the SPF record. The easiest way to do so is an online SPF test site like http://tools.bevhost.com/spf/

I usually test both a host on the list and one not. The ones on the list will pass. The ones not on the list may fail (with -all) or report as neutral (?all).
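You can also sanity-check the ip4: mechanisms yourself. The sketch below is deliberately simplified (it ignores the mx, a, include, and redirect mechanisms, so it is no substitute for a full SPF evaluator) and just checks whether an address falls inside any listed netblock:

```python
import ipaddress

def spf_ip4_match(spf_record, client_ip):
    """Check an IPv4 address against only the ip4: mechanisms of an SPF record."""
    ip = ipaddress.ip_address(client_ip)
    for term in spf_record.split():
        if term.startswith("ip4:"):
            # treat a bare address as a /32; strict=False tolerates host bits
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False

record = ("v=spf1 mx ip4:166.150.191.128/26 ip4:98.17.202.0/23 "
          "ip4:173.186.244.0/23 ip4:65.114.230.67/32 ip4:64.196.161.5/32 ?all")
print(spf_ip4_match(record, "166.150.191.130"))  # True: inside the /26
print(spf_ip4_match(record, "192.0.2.1"))        # False: falls through to ?all
```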

DK/DKIM Records

DK and DKIM are public/private key based header signatures that assure the validity of the e-mail sender. The first thing you will need is a public/private key pair – these do not have to be trusted keys from a public certificate authority. A vendor or another internal group may provide their own public key for inclusion in our DNS record. Do not provide our private key to anyone else – keys are free, and if they are unable to generate one of their own, make one for them!

You can use openssl (openssl genrsa -out dkimkey.private 1024 followed by openssl rsa -in dkimkey.private -out dkimkey.public -pubout -outform PEM), an online generator, or the Web CA server. Once you have a key pair, you need a selector. This is because different mail servers may send mail for a domain whilst using unique private keys to sign the messages. The selector can be anything – the selector name is configured in the mail server. It is visible in the mail headers and mail logs, so don’t elect to use anything rude. Stash the private key on your mail server (or provide it to the mail server owner) and put the public key in a DNS TXT record “selectorname._domainkey.sub.domain.gTLD”. The k= indicates the key type (rsa in the openssl example), you can indicate signatures are being tested “t=y” if desired, and then paste the bits between —-BEGIN PUBLIC KEY—- and —-END PUBLIC KEY—- into the p= part.

k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0s07391Axpsi/G0PTsO1 io1LOXSZ0bWAku4bgJ//swZj8OlFvDo59n9qC2Wsd21afI3si/PdDoDP69HNdgAT tIPaK6J0UqcCo9RNSiM3uA+GngdgTupwE2KrKn9/WQbC0tDA8e64e0HBHXwcF/ru OF+18LvpoA/cu1TFUNk0z+GSvqQ4L79k+gZWALvJL7kvCMIu3Gy8ZJpNerRSdrYH l/Nvg87dlZ+9yRI33IwNYpVl1UIrd6qLnGgM1xDMF+Sn21Obd06FOkV5ObXqKBPv 7gMhsUOPu8cIWK7wrd143wH5sWWX1VCBhhIEv1GFp6+SotvZayH5fQ/ri+BjWYzf PwIDAQAB
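Building the p= value by hand is error prone; a tiny helper can collapse the PEM file for you. The sample key below is a stand-in for illustration, not a real key:

```python
def pem_to_p(pem):
    """Collapse a PEM public key into the single-line p= value for the
    selector._domainkey TXT record (drops the BEGIN/END armor lines)."""
    lines = (line.strip() for line in pem.strip().splitlines())
    return "".join(line for line in lines if not line.startswith("-----"))

sample = """-----BEGIN PUBLIC KEY-----
MIIBIjANBgkq
hkiG9w0BAQEF
-----END PUBLIC KEY-----"""
print("k=rsa; p=" + pem_to_p(sample))
```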

You should have an author domain signing practices record (_adsp._domainkey.sub.domain.gTLD) – this tells recipients what to do if a message is not signed. The content is “dkim=all” when all mail from the domain is signed. If all mail is signed and anything not signed should be dumped, then the content is “dkim=discardable”. This does not ensure that unsigned messages are discarded – that decision is up to the individual mail recipient configurations. To make no assertion, use “dkim=unknown”.

You should also have a _domainkey.sub.domain.gTLD record – you can include “t=y” when you are testing – this instructs recipients to treat signed and unsigned mail no differently. You can include notes (n=), a responsible party for the domain (r=). The important one is o= … “o=-“ means all mail from the domain should be signed, “o=~” means some mail from the domain may be unsigned.
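Assembled for the example subdomain, the three records might look like this. The selector name s1 is arbitrary (it just has to match the mail server configuration) and the p= value is truncated for illustration:

```
s1._domainkey.ljrtest.windstream.com     TXT   "k=rsa; t=y; p=MIIBIjANBgkqhkiG9w0BAQEF...PwIDAQAB"
_domainkey.ljrtest.windstream.com        TXT   "o=-; n=All mail from this domain is signed"
_adsp._domainkey.ljrtest.windstream.com  TXT   "dkim=all"
```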

Then test the records – you can send a message to autorespond+dkim@dk.elandsys.com and receive back a very detailed report on the DKIM validation, or you can use a web-based validation tool that checks only the DNS components.


Bar Codes

I realized, recently, that my experience in manufacturing inventory management systems is actually useful for smaller craft businesses. Someone inquired about using bar codes in their soap making business. The first question is why you are using bar codes. For personal use (like inventory management) or for codes used by outside parties? Or both — you can have both internally maintained inventory management bar codes and a registered UPC code for finished products.

If you are trying to sell products in a store that uses laser scanners for checkout, then you need to use a system with managed number assignment. Otherwise two companies could randomly assign the same code to a product — you ring up a bar of soap and get charged for a hundred dollar handbag. What that system *is* depends on where the product would be sold (and, to some extent, what the product *is* — books use the ISBN system). UPC in the US (https://www.gs1us.org), EAN in the EU (https://www.gs1uk.org). The price to use these codes depends on how many unique products you have (https://www.gs1us.org/upcs-barcodes-prefixes/get-started-guide/1-get-a-gs1-us-issued-company-prefix). Up to 10 codes is a $250 initial fee plus a $50 annual renewal. Up to 100 codes is a $750 initial fee plus a $150 annual renewal. Up to 1,000 codes is a $2,500 initial fee plus a $500 annual renewal. The price tiers are economical for companies that do not have variants of a single product (different sizes, different colours) because multiple codes are not used for essentially the same product.

I’ve only worked with companies that manufacture single variations of a product. In small craft manufacturing, the number of codes you need can get out of control. Using registered bar codes creates a financial incentive for streamlining product offerings — you could package your bath bombs individually, in two packs, three packs, four packs … ten packs *but* that uses nine different UPC codes! Add a pot of lip balm, a tube of lip balm, a guest bar of soap, and a full size bar of soap and the renewal fee triples. Some small vendors will accept a single code for same-price items (“4 oz soap bar” or “bath bombs, four pack”), but larger vendors require a unique code for each unique iteration of the product because they manage their inventory through UPC codes. You need to understand who will be using the codes and what their requirements are before you can determine how many codes you need to purchase.

Does purchasing a single UPC through a reseller make sense? Again, the individual retailer requirements need to be checked — some companies require the company prefix be registered to the manufacturer (i.e. you cannot use a reseller to purchase a single UPC code). Assuming your intended customer allows resold codes, the cost effectiveness depends on how many products and for how long you want to maintain your codes. The reseller structure is good for someone test-marketing in a retail store – if the market test does not pan out, you are out ten bucks (current price from a quick Google search). Even long term, a single UPC reseller is cost effective for up to five products. If you have nine products, you save money registering with GS1 in the third year. Seven products breaks even after five years. Six products breaks even after ten years. But verify the services offered by the reseller — how do you update your product registration?
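You can run the break-even math for your own product count with a quick sketch. Note the fee structure here is an assumption for illustration: a hypothetical $15 per code per year reseller fee against GS1’s 10-code tier ($250 initial, $50 annual renewal); plug in the actual terms from your reseller and from GS1.

```python
def cumulative_cost(initial, renewal, years):
    """Total paid through year N: initial fee in year one, renewal each year after."""
    return initial + renewal * (years - 1)

def breakeven_year(products, reseller_per_code_per_year,
                   gs1_initial, gs1_renewal, horizon=20):
    """First year a GS1 company prefix becomes cheaper than per-code reseller fees.

    Returns None if the prefix never wins within the horizon."""
    for year in range(1, horizon + 1):
        gs1 = cumulative_cost(gs1_initial, gs1_renewal, year)
        reseller = products * reseller_per_code_per_year * year
        if gs1 < reseller:
            return year
    return None

# nine products, hypothetical $15/code/year reseller, GS1 10-code tier
print(breakeven_year(9, 15, 250, 50))   # 3
```

With a single product, the reseller stays cheaper indefinitely under these assumed fees, which matches the general advice above that resold codes suit small catalogs.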

Printing the bar codes is fairly trivial — there are UPC and EAN fonts available. Some are free, some cost money. You type the proper characters (I prefer fonts where ‘9’ on my keyboard is the 9 bar code. A lot of free fonts are mapped oddly – like you need to type ‘c’ to get a 9) and change the font. I also prefer fonts with human-readable characters under the bar code. Firstly this confirms I’ve typed the proper thing, but it also allows for manual code entry in case the bar code gets obscured. You can print the code on your product wrapping, or include the code in your packaging design and outsource package production.

Could you use the UPC/EAN codes for inventory management? Sure — raw materials you purchase may already have a unique code assigned. Scan the bar code, enter the quantity … voila. But if you are purchasing raw materials that are not already coded … there’s no reason to spend money on a prefix that allows you to code all of your inventory! UPC prefix assignments are a little bit like network blocks — there are different “size” blocks that allow different numbers of products to be registered. A prefix block that allows up to 10 products costs a lot less than a prefix block that allows ten thousand products. If you grow a bunch of different botanicals in your garden, allocating a registered code to each item could get quite costly.

As an inventory management system (the majority of my barcode experience), you can use whatever format bar code and whatever numbering system you like. The number doesn’t need to mean anything to anyone else – and it does not need to be globally unique – so the entire process is a lot easier. If the manufacturing company next door uses the code you assigned to resistance wire for their quart bottles … who cares. As long as you have a database that indicates that item 72 is magnesium oxide powder, people scanning inventory against your database will see magnesium oxide powder.

For printing bar codes, there are fonts available for free online. I’ve used code 39 in the inventory systems I’ve built out – to print the code, just type the numbers and change the font. We used sheets of sticky labels & printed the barcodes onto them – then stuck the label on the raw material bins. Work orders printed out on a form and had a sticky label for the product(s) being built. Scanning the product bar code brought up a list of materials that needed to be used and pulled up the engineering draft for the product. Employees scanned raw materials out of inventory as they pulled parts, built the item, then affixed the label from the work order to the finished product to scan the completed item into inventory. All of the number assignments were internal – generally using whatever manufacturing software the company already maintained, but I’ve done it in custom code with a PHP front end and MySQL backend too. You need a form for adding to inventory and a form for removing from inventory. Scan the bar code to input the item number, enter the amount being used, submit. You could even maintain your purchase orders and recipes as a batch of inputs — receive an order and check everything contained therein into inventory. Select a specific recipe and check set amounts of ingredients out of inventory.

I generally also create a reconciliation form — similar to how stores will go through and do manual inventory counts to true-up their database inventory with reality, a reconciliation form allows you to update the inventory database with the actual amount on hand. Personally, I store deltas from true-up operations too — if we should have fifty ounces of shea butter but only have forty seven because of over-measuring or small bits left on scoops, we want to know that there was a loss of three ounces. Once you know your inventory deltas, then you can include that loss into the cost of goods produced.
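A minimal sketch of the check-in / check-out / reconciliation logic described above. Here a plain Python dictionary stands in for the database table, and the item names would normally be numbers scanned from bar codes:

```python
inventory = {}   # item number -> quantity on hand
deltas = []      # (item, loss) recorded at each reconciliation

def check_in(item, qty):
    """Receive material into inventory (e.g. checking in a purchase order)."""
    inventory[item] = inventory.get(item, 0) + qty

def check_out(item, qty):
    """Pull material out of inventory as it is used."""
    if inventory.get(item, 0) < qty:
        raise ValueError(f"only {inventory.get(item, 0)} of {item} on hand")
    inventory[item] -= qty

def reconcile(item, counted):
    """True-up the database to a physical count and record the delta."""
    delta = inventory.get(item, 0) - counted
    deltas.append((item, delta))
    inventory[item] = counted
    return delta

check_in("shea butter", 50)
print(reconcile("shea butter", 47))   # 3 -- ounces lost to over-measuring
```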

Why would you want to put so much effort into tracking your inventory? I see a lot of people asking how someone calculates costs for finished products. Calculating cost is fairly easy if you track your inventory in and out (costs not associated with inventory [your time, electricity, space, taxes] still need to be accommodated). In the inventory database, you have an item number, a quantity, and a price per unit value. As inventory is checked in, the price per unit is adjusted to include the incoming items. A recipe — specific amounts of different items — can be represented as a cost. You can also track material cost over time (trend the price of an ingredient, see if there’s a better time to buy it) or compare costs for product reformulation – takes additional database space and a little extra coding, but it is good information to manage costs.
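A sketch of the moving weighted-average cost approach (item names and prices are made up for illustration): each check-in blends the value of existing stock with the incoming receipt, and a recipe's material cost is just the sum of its ingredient amounts times their current unit costs.

```python
class Item:
    """Quantity on hand plus a moving weighted-average unit cost."""
    def __init__(self):
        self.qty = 0.0
        self.unit_cost = 0.0

    def check_in(self, qty, total_price):
        # blend the value of existing stock with the incoming receipt
        value = self.qty * self.unit_cost + total_price
        self.qty += qty
        self.unit_cost = value / self.qty

def recipe_cost(items, recipe):
    """Material cost of a recipe given as {item: amount used}."""
    return sum(items[name].unit_cost * amount for name, amount in recipe.items())

items = {"shea butter": Item(), "lye": Item()}
items["shea butter"].check_in(10, 20.00)   # 10 oz at $2.00/oz
items["shea butter"].check_in(10, 40.00)   # 10 oz at $4.00/oz -> $3.00/oz average
items["lye"].check_in(32, 8.00)            # 32 oz at $0.25/oz
print(recipe_cost(items, {"shea butter": 2, "lye": 1}))   # 2*3.00 + 1*0.25 = 6.25
```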

How to reflect shipping costs on incoming inventory is a personal decision. The easiest way is to divide the cost equally over the items – this works well for flat-rate shipped orders. You could also divide the shipping cost over the weight of the shipment — $10 in shipping over forty pounds of materials is twenty-five cents per pound. Then a three pound item costs seventy-five cents in shipping, and a ten pound item is $2.50 to ship.
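The weight-based allocation from the example above, as a quick sketch:

```python
def shipping_per_item(order_shipping, weights):
    """Split a flat shipping charge across items in proportion to weight (lbs)."""
    per_pound = order_shipping / sum(weights.values())
    return {item: round(per_pound * pounds, 2) for item, pounds in weights.items()}

# $10 shipping over a forty pound order -> twenty-five cents per pound
print(shipping_per_item(10.00, {"3 lb item": 3, "10 lb item": 10, "bulk oils": 27}))
# {'3 lb item': 0.75, '10 lb item': 2.5, 'bulk oils': 6.75}
```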

The question was specifically asked regarding soap making, but the methodology is valid for basically any industry or home business. Most of my experience was garnered at an electric heater element manufacturer. The approach is viable for recipe-based manufacturing (knitting, crocheting, sewing, soap making) and even non-recipe based manufacturing … you’d just need to pull materials from inventory as you use them.