Author: Lisa

More Corporate Tax Rate Bullshit

I’m never sure if ‘lower the corporate tax rate’ people are just completely ignorant of how business accounting actually works or just a pack of liars (not mutually exclusive, I know).
 
The background to their argument is that CapEx isn’t deductible the way a business’s current expenses are – CapEx gets depreciated over a number of years. If I buy a snazzy new machine for my manufacturing plant and pay half a million dollars for it, I actually deduct 100k a year for the next five years. Depreciation calculations are more complicated than that, but the crux of it is [cost] / [years over which the product depreciates]. And there’s a whole table defining depreciation periods.
 
*But* Section 179 deductions allow the full cost to be deducted in the first year. These deductions have a 500k limit and a spending cap of around 2 mill. The whole thing is more complicated because there are years where bonus depreciation is a thing … but the “OMG the corporate tax rate is 35%” crowd (that rate applies to businesses with over 18 MILLION a year in taxable income) … “Lowering the corporate tax rate will spur investment” is only *maybe* true for companies contemplating multi-million dollar investments. This isn’t something meant to help the small manufacturer. Say my small/medium business sinks half a mill into a snazzy machine and *doesn’t* depreciate it over time. Under Section 179, I deduct the whole equipment purchase this year … which is a bigger savings the *higher* the corporate tax rate happens to be. Thus I’ve got less incentive to invest in new equipment if the tax rate is lowered.
 
Since they’re talking about 35% tax rates, we’re writing tax code to benefit GE (Apple, Amazon, insert your favorite enormous company here) … it isn’t like capital expenditures aren’t written off income AT ALL. Depreciation is spread out over the useful life of the equipment. Computers depreciate over 5 years. Cars and trucks depreciate over 5 years too. Equipment used in the manufacture of musical instruments depreciates over 12 years.
 
What makes investing in large capital expenses more attractive? I’m GigantorGuitarCo and we’re talking two hundred eighty million dollars in receipts and a hundred fifty mil in taxable income. And I buy a six million dollar something-or-other to make guitars. At a 35% corporate tax rate, depreciating that purchase reduces my taxes by 2.1 mil over the equipment’s life. At a 20% corporate tax rate, it only reduces my taxes by 1.2 mil. And yeah, it sucks that I had to outlay six million dollars this year and only got to save 175k on this year’s taxes. But doesn’t it suck *MORE* to spend six mil and only save 100k on my taxes??
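The depreciation arithmetic above can be sketched quickly — a minimal Python check of the post’s figures, assuming the 12-year schedule for instrument-making equipment mentioned earlier:

```python
# Tax savings from depreciating a capital purchase, using the post's figures.
cost = 6_000_000                          # the something-or-other that makes guitars
years = 12                                # depreciation period for instrument-making equipment
annual_deduction = cost // years          # 500,000 deducted per year

def tax_savings(rate_pct):
    # Returns (per-year savings, lifetime savings) at a given corporate rate.
    return annual_deduction * rate_pct // 100, cost * rate_pct // 100

yearly_35, total_35 = tax_savings(35)     # 175,000 per year; 2,100,000 total
yearly_20, total_20 = tax_savings(20)     # 100,000 per year; 1,200,000 total
```

Same numbers as above: the higher the rate, the bigger the deduction is worth.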
 
Now the theory is that lowering the corporate tax rate will leave companies with more money *to* invest. In this case, GigantorGuitarCo didn’t *have* six million dollars and instead spent years using sub-optimal processes because they simply didn’t have the money to invest – regardless of how much they’d save on taxes *by* making that investment. I’m paying 52 million in taxes at 35%, but next year my taxes, at 20%, will be 30 mill. That frees up 22 million dollars, and I use that money to buy a whole bunch of equipment. Honestly, my best case would be a corporate tax rate of 20% for ONE YEAR. That lets me free up capital to invest in my business, then gives me the maximum tax benefit as I depreciate out the equipment.
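Checking the freed-up-capital arithmetic (the post rounds 52.5 and 22.5 down to 52 and 22):

```python
# Cash freed up by the rate cut on GigantorGuitarCo's taxable income.
taxable_income = 150_000_000
tax_at_35 = taxable_income * 35 // 100    # 52,500,000 owed at 35%
tax_at_20 = taxable_income * 20 // 100    # 30,000,000 owed at 20%
freed_capital = tax_at_35 - tax_at_20     # 22,500,000 left over to invest
```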
 
But that’s mathematics without thinking about business. As the CEO of GigantorGuitarCo … wouldn’t I take out a loan (business interest is tax deductible too), hire a couple of new tax attorneys, or give up some equity in a fundraising round to get that six million dollars if the machine was going to provide some huge benefit to my company? And if the machine isn’t going to provide that much benefit … why wouldn’t I take my 22 mil in tax savings and stash it somewhere? Buy the machine when we *need* it, or when tax rates go up and the ROI calculation is different.
 
Sure, there are edge cases where lower tax rates will spur investment in the business — *some* CEOs raised their hands when Gary Cohn asked if they planned increased investments should the GOP tax plan pass. [Although these may just be die-hard trickle-down guys who will SAY anything to promote corporate tax cuts.] But the entire point of business investment (and the rationale for depreciating CapEx instead of allowing a full first-year deduction) is that the new thing-a-ma-bob adds value to your business. My six million dollar investment makes guitars better/faster/with less human labor, thus increasing my profit margin. Said another way, CapEx is meant to increase employee productivity. Short of some dramatic surge in demand … increased productivity means *fewer* employees. Not stellar economic stimulus, that.

The Colloquial Occam’s Razor

Occam’s razor – it is futile to do with more things that which can be done with fewer – is colloquially rendered as “the simplest solution is the most likely”. We had multiple tickets opened today for authentication failures on an Apache web server. Each malfunctioning site uses LDAP authentication and authorization against an Oracle Unified Directory. Nothing in the error logs. The service account from the Apache configuration can log in and query the directory from the box using ldapsearch, so the account is valid and there is nothing in the OUD preventing access from this particular host.

That’s a puzzler, and I was about to take down a lot of web sites to reload the service with its log level set to debug. Not even sure what made me do it, but I went out to the groups and looked at their member lists. Oops. Something had gone wrong with the identity management platform and employee accounts had been cleared from the groups (all of the contractors were still members, which made it even stranger). Added a few people back into the groups appropriate for their positions and, voila, they could log into their sites again.

No idea how the identity management group restored the memberships, but verifying people who should have been members (who had been members and had done nothing to remove their memberships) were actually members of the group saved a lot of time running through debug logs. Sometimes the simplest answer is the most likely.

Apple FaceID

The irony of facial recognition — the idea is that you trade some degree of privacy for enhanced security. There are 10,000 four-digit codes – a 1:10,000 chance of any specific code unlocking your device. Apple touted a one in a million chance of someone else’s face unlocking your phone.

So you trade your privacy for this one in a million super secure lock. Aaaaand a Vietnamese security firm can hack the phone with a mask. Not even a *good* mask (like I take a couple of your pictures, available online, synthesize them into a 3d image and print a realistic mask).

This feat wasn’t accomplished with millions of dollars of hardware. It took them a week and $150 in materials (plus equipment, but a 3D printer isn’t as expensive as you’d think).

Boyd v. United States and Riley v. California provide fourth amendment protection for phone content … but that only means the police need a warrant. Fourth amendment, check. Fifth amendment … Commonwealth of Virginia v. Baust and United States v. Kirschner say that while you cannot be compelled to reveal a passcode to allow police to access your phone (that’s testimonial) … a fingerprint is not testimonial, it is documentary. And can be compelled. As with a lot of security, one can ask why I care. If I’m not doing anything wrong, then who cares if the police peruse my phone? But if I’m not protesting, why do I care if peaceful assembly is being restricted? I’m not publishing the Paradise Papers, so why do I care if freedom of the press is being restricted? Like Martin Niemöller and the Nazis – by the time they get around to harming you, there’s no one left to care.

Pumpkin Pie Poncho

I bought Candy Castle Pattern’s Pumpkin Pie Poncho pattern when it was first released. I finally made one today. It is a quick project. The pattern piece gets cut up and isn’t really reusable. That saves paper if you are just making one, but requires extra printing or tracing if you are making multiples of the same size.

The pattern says it needs 1.25 yards of a 60″ wide lining fabric for a size 6. Problem is – I only had one yard of the flannel lining, and it was 42″ wide. Looking at the pattern piece, the main body is not a rectangle – one side is a lot narrower than the other. Instead of folding the fabric in half and cutting two pieces along the folded line, I folded the fabric just enough that the poncho body fit on the part with two layers. Cut one piece along the fold.

Then unfold the fabric and fold the *other* side down — there will be parts with only one layer of fabric, where the first piece was cut. Align the pattern piece so the widest section is away from the cut section. The narrow section of the pattern fits on the fabric with two layers. Cut the second poncho piece.

Unfold the fabric – there is an odd-shaped bit diagonal from each poncho cutout – these can be used to cut the hood (or cowl) piece. Voila – poncho lining from one yard of 42″ wide flannel.

When I started fitting the pieces together, I realized this could be done as a reversible poncho. Doing so required modifying the process a bit — the main piece fabric and lining were still aligned right sides together.

I used clips instead of pins, so Anya was able to ‘test’ the poncho as it was being constructed.

Serged along the bottom curve, turned right way about, and top stitched. The pocket fabric was still sandwiched between the layers, so the same seam attached it.

The top stitching runs right along the serged part, so it’s a little bit stiff and puffed up.

Then the fabric and lining along the arms were stitched separately – leaving the seams encased inside the poncho. The main fabric of the hood was lined up, right sides together, and serged. The lining fabric was the tricky bit – I was able to line it up, right sides together, and serge all but about 8″ – moving the still-open hole along the seam being sewn.

I then turned the edges over and hand-stitched the remaining bit that is right along the front neckline. Anya doesn’t like hoods that wrap around her neck (although she’ll wear a scarf, go figure!), so I modified the hood to have a small gap along the front.

The process is a little more difficult, but we’ve got a pawprint poncho and a snow leopard one. There’s no pocket on the flannel side — mostly because I didn’t have enough fabric 🙂 But she keeps her arms on the inside and uses the snow leopard pocket.

 

Roy Moore

I assume the crux of the support for Moore’s alleged behaviour is consent. Doesn’t explain why anyone would need to trot out virgin births as an example of how OK underage sex is (uhh, *virgin* birth). Doesn’t speak to how non-consensual pussy groping is OK either.

Thing is – consent is challenging with younger people. I remember *being* a 14-18 year old girl who felt urbane and sophisticated because some older guy was interested in me. Exactly as WaPo put it – “flattering at the time, but troubling as they got older”. Especially when I was older and saw other underage girls expressing the same pride in their relationships. However much some 30-something guy is willing to smile and nod while a young teen prattles on with her deep thoughts, intellectual stimulation was NOT what the guy was after … and it was dismaying to realize, in retrospect, that the same logic applied to me.

The entire point of statutory rape is that people under whatever bright-line age of consent exists in the jurisdiction don’t have the wherewithal (i.e. experience with life) to provide consent. Modern society is moving that way — NY enacted legislation over the summer that moves the legal marriage age up to match the consent age: 17 (before that legislation, having the court/parents sign off on a marriage was an end-run around statutory rape laws). Someone who wants to argue that Moore’s actions were acceptable *because* the kids were OK with it … would they be willing to put forth legislation eliminating both balancing tests and bright-line ages??

OpenHAB Cloud Installation Prerequisites

We started setting up the OpenHAB cloud server locally, and the instructions we had found omitted a few important steps. They say ‘install redis’ and ‘install mongodb’ without providing any sort of post-install configuration.

Redis
# This is optional – if you don’t set a password, you’ll just get a warning on launch that a password was supplied but none is required. While the service is, by default, bound to localhost … I still put a password on everything just to be safe

vi /etc/redis.conf # Your path may vary, this is Fedora. I've seen /etc/redis/redis.conf too

# Find the requirepass line and make one with your password

480 # requirepass foobared
requirepass Y0|_|RP@s5w0rdG03s|-|3re

# Restart redis

service redis restart

Mongo:
# Install mongo (dnf install mongodb mongodb-server)
# start mongodb

service mongod start

# launch mongo client

mongo

# Create user in admin database

db.createUser({user: "yourDBUser", pwd: "yourDBUserPassword", roles: [{role: "userAdminAnyDatabase", db: "admin"}]});
exit

# Modify mongodb server config to use security

vi /etc/mongod.conf

# remove the comment markers before ‘security:’ and ‘authorization’ – set authorization to enabled:

99 # Security options - Authorization and other security settings
100 security:
101 # Private key for cluster authentication
102 #keyFile: <string>
103
104 # Run with/without security (enabled|disabled, disabled by default)
105 authorization: enabled

# restart mongo

service mongod restart

#Launch mongo client supplying username and connecting to the admin database

mongo -uyourDBUser -p admin

# it will connect and prompt for password – you can use db.getUser to verify the account (but you just logged into it, so that’s a bit redundant)

MongoDB shell version: 3.2.12
Enter password:
connecting to: admin
> db.getUser("yourDBUser");
{
        "_id" : "admin.yourDBUser",
        "user" : "yourDBUser",
        "db" : "admin",
        "roles" : [
                {
                        "role" : "userAdminAnyDatabase",
                        "db" : "admin"
                }
        ]
}

# Create the openhab database — mongo is a bit odd in that “use dbname” will switch context to that database if it exists *and* create the database if it doesn’t exist. Bad for typo-prone types!

use yourDBName;

# Create the user in the openhab database

db.createUser({user: "yourDBUser", pwd: "yourDBUserPassword", roles: [{role: "readWrite", db: "yourDBName"}]});

# You can use get user to verify it works

db.getUser("yourDBUser");
exit

# Now you can launch the mongo client connecting to the openhab database:

mongo -uyourDBUser -p yourDBName

# It will prompt for password and connect. At this point, you can use “node app.js” to launch the openhab cloud connector. Provided yourDBUser, yourDBUserPassword, and yourDBName match what you’ve used in the config file … it’ll connect and create a bunch of stuff

 

Strange spam

We have been getting spam messages with the subject “top level quality of paint bucket” both at home and at work. I get that it costs essentially nothing to send a million junk e-mail messages, so it doesn’t take a lot of sales for a campaign to be profitable. But are there seriously people who buy their paint buckets from cold e-mails? Especially e-mails that I thought were trying to sell me buckets of paint.

And how lazy is a spam campaign that uses static strings in the subject field?

The Politics Of Anger

Michael Kruse interviewed people out in Johnstown, PA who had voted for Trump last year to see what they think of his performance thus far. Objectively, someone who campaigned on Muslim bans, enormous walls along the Mexican border, bringing back the steel mills, and bringing back coal mining … is, well, just another politician promising the world and delivering nothing. But these people still love Trump. And would vote for him again. Why?

It seems like voters want someone to be angry along with them. There is no easy solution, there is no painless solution … but no one wants to hear the truth. Or hear hard answers. But someone who obviously lies to them yet conveys a story of their shared victimization … that’s who gets their vote.

Coconut Almond Chocolate Bars

I made a homemade dessert inspired by Almond Joy bars. It’s got three layers – coconut, sliced almonds, and either chocolate or carob.

For the coconut layer, combine the following in a food processor and pulse until you’ve got a somewhat creamy well blended mix.

3 cups unsweetened coconut flakes
1/4 cup coconut oil
1/2 cup coconut cream
1/4 cup maple syrup

Line a pie pan with clingfilm and press coconut mixture into pan. Top with sliced almonds.

I then made both carob and chocolate sauce to spread on top. Melt 1/4 cup of coconut oil. Add 2 tablespoons of maple syrup. Then stir in either cocoa powder or carob powder until the mixture has the consistency of melted chocolate.

Spread chocolate or carob (I made it with half chocolate and half carob). Refrigerate for an hour so the chocolate sets.

Load Runner And Statistical Analysis Thereof

I had offhandedly mentioned a statistical analysis I had run in the process of writing and implementing a custom password filter in Active Directory. It’s a method I use for most of the major changes we implement at work – application upgrades, server replacements, significant configuration changes.

To generate the “how long did this take” statistics, I use a perl script with the Time::HiRes module (_loadsimAuthToCentrify.pl), which measures microsecond time. There’s an array of test scenarios — my most recent test compared Unix/Linux host authentication using pure LDAP against Centrify authentication, so the array held fully qualified hostnames. Sometimes there’s an array of IDs on which to test — TestID00001, TestID00002, TestID00003, …., TestID99999. And there’s a function to perform the actual test.

I then have a loop that generates a pseudo-random number and selects the test to run (and user ID to use, if applicable) using that number:

my $iHosts = scalar @strHosts;            # number of hosts in the scenario array
my $iRandomNumber = int(rand($iHosts));   # 0 .. $iHosts-1, avoids modulo bias
my $strHost = $strHosts[$iRandomNumber];  # host selected for this iteration

The time is recorded prior to running the function (my $t0 = [gettimeofday];) and the elapsed time is calculated when returning from the function (my $fElapsedTimeAuthentication = tv_interval ($t0, [gettimeofday]);). The test result is compared to an expected result and any mismatches are recorded.
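A minimal Python sketch of that measure-and-compare loop — the real script is Perl with Time::HiRes, so `run_test`, the host names, and the “success” expected result here are all stand-ins:

```python
import random
import time

def run_test(host):
    # Stand-in for the real test body, which would authenticate
    # against the host via LDAP or Centrify and return the result.
    return "success"

hosts = ["linuxhost01.example.com", "linuxhost02.example.com"]
results = []
for _ in range(100):
    host = random.choice(hosts)              # pseudo-random scenario selection
    t0 = time.perf_counter()                 # analogous to [gettimeofday]
    outcome = run_test(host)
    elapsed = time.perf_counter() - t0       # analogous to tv_interval()
    if outcome != "success":                 # record any result mismatches
        print(f"MISMATCH: {host} returned {outcome}")
    results.append((host, outcome, elapsed))
```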

Once the cycle has completed, the test scenario, results, and time to complete are recorded to a log file. Some tests are run multi-threaded and across multiple machines – in which case the result log file is named with both the running host’s name and a thread identifier. All of the result files are concatenated into one big result log for analysis.

A test is run before the change is made, and a new test for each variant of the change for comparison. We then want to confirm that the general time to complete an operation has not been negatively impacted by the change we propose (or select a route based on the best performance outcome).

Each scenario’s result set is dropped into a tab on an Excel spreadsheet (CustomPasswordFilterTiming – I truncated a lot of data to avoid publishing a 35 meg file, so the numbers on the individual tabs no longer match the numbers on the summary tab). On the time column, max/min/average/stdev functions are run to summarize the result set. I then break the time range between 0 and the max time into buckets and use the countif function to determine how many results fall into each bucket (it’s easier to count the number under a range and then subtract the numbers from previous buckets than to make a combined statement to just count the occurrences in a specific bucket).
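The count-under-each-boundary-then-subtract trick reads like this in Python (the sample times are made up; the real data comes from the result logs):

```python
# Bucket counts via cumulative counts, mirroring the Excel COUNTIF approach:
# count everything at or under each boundary, then subtract the prior bucket.
times = [0.8, 1.2, 0.3, 2.9, 1.1, 0.7, 2.2, 0.5]    # elapsed seconds (made up)
n_buckets = 4
width = max(times) / n_buckets                       # bucket size over 0 .. max
cumulative = [sum(1 for t in times if t <= width * (i + 1))
              for i in range(n_buckets)]             # COUNTIF(range, "<=" & bound)
per_bucket = [cumulative[0]] + [cumulative[i] - cumulative[i - 1]
                                for i in range(1, n_buckets)]
```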

Once this information is generated for each scenario, I create a summary tab so the data can be easily compared.

And finally, a graph is built using the lower part of that summary data. Voila – a quick visual representation of several million test cycles. This is what gets included in the project documentation for executive consideration. The whole spreadsheet is stored in the project document repository – showing our due diligence in validating that user experience should not be negatively impacted, as well as providing a baseline of expected performance should the production implementation yield user experience complaints.