Tag: Security

Fortify on Demand Remediation – LDAP Injection

If you build an LDAP search filter from user input, it’s possible for the user to inject unexpected content into the search. If I say my username is lisa)(cn=*) or lisa)(|(cn=*) … a filter of (sAMAccountName=$strUserInput) becomes something unexpected – the injected parentheses add filter conditions the user was never meant to control.

In PHP, there’s a built-in function to escape values bound for LDAP search filters – ldap_escape(). Apply it to the user-supplied value rather than to the assembled filter, since escaping the whole filter would also escape the parentheses and ampersand that are legitimate filter syntax:

$strSafeLogonID = ldap_escape($strUserLogonID, "", LDAP_ESCAPE_FILTER);
$scriteria = "(&($strUIDAttr=$strSafeLogonID))";
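
A minimal sketch of the escaped criteria in use – the connection handle and base DN here are assumptions for illustration, not from the original code:

// Assumes $ds is an established LDAP connection; the base DN is an example
$searchResult = ldap_search($ds, "dc=example,dc=com", $scriteria);
$entries = ldap_get_entries($ds, $searchResult);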

Fortify on Demand Remediation – Header Injection: Cookies

Cookie injection vulnerabilities occur when user input is stored into a cookie. It’s possible for malicious input to include newline characters that would be parsed out as new elements in the cookie. As an example, if I send my user ID as “lisa\r\nadmin: true” … I’ve got a cookie that says the userID is lisa and admin is true.

With Fortify on Demand, you cannot just filter out \r and \n characters – Fortify still says the code is vulnerable. You can, however, filter out anything apart from alphanumeric characters and underscores (and, I assume, any oddball character that has a legit reason to be included in the user input):

$strLogonUserID = filter_var(preg_replace('/[^a-z\d_]/iu', '', $_POST['strUID']), FILTER_SANITIZE_STRING, FILTER_FLAG_STRIP_LOW);
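
The sanitized value can then be set into the cookie as usual – the cookie name and lifetime here are just for illustration:

// Hypothetical usage of the sanitized value; name, expiry, and path are examples
setcookie("userID", $strLogonUserID, time() + 3600, "/");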

Fortify on Demand Remediation – JSON Injection

This vulnerability occurs when you write unvalidated input to JSON. A common scenario would be using an Ajax call to pass a string of data to a file and then decoding that string to JSON within the file.

To get around the Fortify scanning requirements, you have to base64-encode the string before sending it through the Ajax call:

var update = $.ajax({
    type: "POST",
    url: "SFPNotesUpdate/../php/validate_notes.php",
    data: { tableData: btoa(JSON.stringify(HotRegisterer.getInstance('myhot').getData())) },
    dataType: 'json'
});

When reading the input to decode the JSON string to an array you have to perform these actions in order:

  • base64_decode the input string
  • sanitize the input string
  • decode the JSON string to an array

$tbl_data = json_decode(filter_var(base64_decode($_POST['tableData']), FILTER_SANITIZE_STRING, FILTER_FLAG_NO_ENCODE_QUOTES), true);
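
Since both base64_decode and json_decode fail quietly on bad input, it’s worth verifying the result before using it – a sketch, not part of the original remediation:

// Hypothetical sanity check -- json_decode returns null on invalid JSON
if (!is_array($tbl_data)) {
    error_log("Failed to decode tableData payload");
    exit;
}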

Fortify on Demand Remediation – XSS Reflected

This vulnerability occurs when you accept user input and then use that input in output. The solution is to sanitize the input. The filter_var or filter_input functions can be used with a variety of sanitize filters.

As an example, code to accept what should be an integer value from user input:

     $iVariable = $_POST['someUserInput'];

Becomes code that removes all characters except for digits (and + or - signs) from the input:

     $iVariable = filter_input(INPUT_POST, 'someUserInput', FILTER_SANITIZE_NUMBER_INT);
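
For free-form string input that gets echoed back into HTML, the complementary approach (not Fortify-specific, just standard practice) is to encode at output time – $strUserValue here is a hypothetical stand-in:

// Hedged example: HTML-encode the user-supplied value when writing it to output
echo htmlspecialchars($strUserValue, ENT_QUOTES, 'UTF-8');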

Fortify on Demand Remediation – Introduction

The company for which I work signed a contract with some vendor for cloud-based static code analysis. We ran our biggest project through it and saw just shy of ten thousand vulnerabilities. Now … when an application sits out on the Internet, I get that a million people are going to try to exploit whatever they can in order to compromise your site. When the app is only available internally? I fully support firing anyone who plays hacker against their employer’s tools. When a tool is an automation that no one can access outside of the local host? Lazy, insecure code isn’t anywhere near the same problem it is for user-accessible sites. But the policy is the policy, so any code that gets deployed needs to pass the scan — which means no vulnerabilities identified.

Some vulnerabilities have obvious solutions — SQL injection is one. It’s a commonly known problem — a techy joke is that you’ll name your kid “SomeName’;DROP TABLE STUDENTS;” … and most database platforms support parameterized statements to mitigate the vulnerability.
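
In PHP, a parameterized statement through PDO looks something like this – the connection, table, and column names are illustrative, not from any project mentioned here:

// A minimal sketch, assuming an established PDO connection in $db;
// the user value is bound as a parameter instead of concatenated into the SQL
$stmt = $db->prepare("SELECT id, name FROM students WHERE name = :name");
$stmt->execute([':name' => $strUserInput]);
$records = $stmt->fetchAll(PDO::FETCH_ASSOC);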

Some vulnerabilities are really a “don’t do that!” problem — as an example, we were updating the server and had a page with phpinfo(); on it. Don’t do that! I had some error_log lines that output user info when the process failed (“Failed to add ecckt $iCircuitID to work order $iWorkOrderID for user $strUserID with $curlError from the web server and $curlResponse from the web service”). I liked having the log in place so, when a user rang up with a problem, I had the info available to see what went wrong. The expedient thing to do here, though, was just to comment those error_log lines out. I can uncomment the line and have the user try again, then check out the commented-out iteration of the file when we’re done troubleshooting.

Some, though … static code analysis tools don’t always understand that a problem is sorted when the solution doesn’t match one of their list of ‘approved’ methods. I liken this to early MS MCSE tests — there was a pseudo-GUI that asked you to share out a printer from a server. You had to click the exact right series of places in the pseudo-GUI to answer the question correctly. Shortcut keys were not implemented. Command line solutions were wrong.

So I’ve started documenting the solutions we find that pass the Fortify on Demand scan for everything identified in our scans — hopefully letting the next teams that use the static scanner avoid the trial-and-error we’ve gone through to find an acceptable solution.

Git – Removing Confidential Info From History

The first cut of code may contain … not best practice code. Sometimes this is just hard coding something you’ll want to compute / look up in the future. Hard coding user input isn’t a problem if my first cut is always searching for ABC123. Hard coding the system creds? Not good. You sort that before you actually deploy the code. But some old iteration of the file has MyP2s5w0rD sitting right there in plain text. That’s bad in a system that maintains file history! The quick/easy way to clean up passwords stashed within the history is to download the BFG JAR file.

For this test, I created a new repository in .\source then created three clones of the repo (.\clone1, .\clone2, and .\clone3). In each cloneX copy, I created a tX folder that has a file named ldapAuthTest.py — a file that contains a statically assigned password as

strSystemAccountPass = "MyP2s5w0rD"

The first thing I did was to redact the password in the files — this means anyone looking at HEAD won’t see the password. Source, clone1, and clone2 are all current. The clone3 copy has pulled all changes but has a local change committed but not merged.

To clean the password from the git history, first create a backup of your repo (just in case!). Then mirror the repo to work on it:

mkdir mirror
cd mirror
git clone --mirror d:\git\testFilterBranch\source

Create file .\replacements.txt with the string to be redacted — in this case:

strSystemAccountPass = "MyP2s5w0rD"

Formatting notes for replacements.txt

MyP2s5w0rD # Replaces string with default ***REMOVED***
MyP2s4w0rD==>REDACTED # Replaces string using custom string REDACTED
MyP2s3w0rD==> # Replaces string with null -- i.e. removes the string
regex:strSystemAccountPass\s?=\s?"MyP2s2w0rD"==>REDACTED # Uses a regex match -- in this case we may or may not have a space around the equal sign

So, in my mirror folder, I have the replacements.txt file which defines which strings are replaced, and a folder that contains the mirror of my repo.

lisa@FLEX3 /cygdrive/d/git/testFilterBranch/mirror
$ ls
replacements.txt source.git

To replace my “stuff”, run bfg using the --replace-text option. Because I only want to replace the text in files named ldapAuthTest.py, I also added the -fi option:

java -jar ../bfg-1.14.0.jar --replace-text replacements.txt -fi ldapAuthTest.py source.git

lisa@FLEX3 /cygdrive/d/git/testFilterBranch/mirror
$ java -jar ../bfg-1.14.0.jar --replace-text replacements.txt -fi ldapAuthTest.py source.git

Using repo : D:\git\testFilterBranch\mirror\source.git

Found 3 objects to protect
Found 2 commit-pointing refs : HEAD, refs/heads/master

Protected commits
-----------------

These are your protected commits, and so their contents will NOT be altered:

 * commit 87f1b398 (protected by 'HEAD')

Cleaning
--------

Found 5 commits
Cleaning commits:       100% (5/5)
Cleaning commits completed in 613 ms.

Updating 1 Ref
--------------

        Ref                 Before     After
        ---------------------------------------
        refs/heads/master | 87f1b398 | 919c8f0f

Updating references:    100% (1/1)
...Ref update completed in 151 ms.

Commit Tree-Dirt History
------------------------

        Earliest                   Latest
        |                               |
        .    D    D    D    m

        D = dirty commits (file tree fixed)
        m = modified commits (commit message or parents changed)
        . = clean commits (no changes to file tree)

                                Before     After
        -------------------------------------------
        First modified commit | dc2cd935 | 8764f6f1
        Last dirty commit     | 9665c4e0 | ccdf0359

Changed files
-------------

        Filename          Before & After
        -------------------------------------
        ldapAuthTest.py | 25e79fa6 ⇒ 4d12fdad

In total, 8 object ids were changed. Full details are logged here:

        D:\git\testFilterBranch\mirror\source.git.bfg-report\2021-06-23\12-50-00

BFG run is complete! When ready, run:

        git reflog expire --expire=now --all && git gc --prune=now --aggressive

Check to make sure nothing looks abjectly wrong. Assuming the repo is sound, we’re ready to clean up and push these changes.

cd source.git

git reflog expire --expire=now --all && git gc --prune=now --aggressive
git push

Back in the clone3 copy – the one with a local commit that hadn’t been pushed – pulling the update from my source repo produces merge conflicts. These are readily resolved, and the source repo can be merged into my local copy. The change I had committed but not pushed is still there, and pushing that change produces no errors.

Now … pushing the bfg changes may not work. In my case, the real repo has a bunch of branches and I am getting “non-fast-forward” errors. To get the history changed, I need to do a force push. Not so good for the other developers! In that case, everyone should get their changes committed and pushed, and the servers should be checked to ensure they are up to date. Then the force push can be done and everyone can pull the new “good” data (which, really, shouldn’t differ from the old data … it’s just the history that is being tweaked).
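
Once everyone is in sync, the force push from the mirror looks something like this – hedged, since the remote name and which refs you push depend on your setup:

git push origin --force --all
git push origin --force --tags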

Proof of Concept

Reading about the meat processing that’s been attacked by ransomware, and thinking about the petrol pipeline … this really seems like proof of concept stuff to me. I’m sure there’s some ‘making money’ and more than a little ego stroking involved. Before we purchase and implement some major system at work (or spend a lot of time developing code), we run a proof of concept test. A quick, slimmed down implementation that runs on some virtual system that lets people see how it’ll work without sinking the time and money into a full-scale implementation. If the thing seems useful, then we buy it and have a capital budget for implementation. If it wasn’t useful … well, we lost some time, but not much.

Attacking small players in various industries to see what kind of impact you can have … seems a lot like a proof-of-concept series of attacks. How well secured was the company? What kind of incident response were they able to mount? How much access did you manage? What came offline? What was the public impact?

Blocking Device Internet Access

We block Internet access for a lot of our smart devices. All of our control is done through the local server; and, short of updating firmware, the devices have no need to be chatting with the Internet. Unfortunately, our DSL modem/router does not have any sort of parental control, blocking, or filtering features. Fortunately, ISC DHCPD allows you to define per-host options. Setting the device’s router option to its own IP address (0.0.0.0 may work as well) gives us devices that can communicate with anything on their subnet without allowing access out to other subnets or the Internet.
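
A sketch of what that per-host entry looks like in dhcpd.conf – the host name, MAC address, and IP addresses below are made up for illustration:

# Hypothetical host entry in /etc/dhcp/dhcpd.conf
host smart-plug {
    hardware ethernet 00:11:22:33:44:55;  # the device's MAC address
    fixed-address 10.0.0.50;              # the address handed to the device
    option routers 10.0.0.50;             # gateway is the device itself, so
                                          # traffic never routes off the subnet
}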

SolarWinds Attack and Access to MS Source Code

Reading Microsoft’s publication about their impact from the SolarWinds hack, I see the potential for additional (unknown) attack vectors. Quoted from MS:

“We detected unusual activity with a small number of internal accounts and upon review, we discovered one account had been used to view source code in a number of source code repositories. The account did not have permissions to modify any code or engineering systems and our investigation further confirmed no changes were made. These accounts were investigated and remediated.”

While the potential for attackers to have read something that provides them some sort of insight is obvious, the less obvious scenario is the SolarWinds attack having obtained credentials with write access elsewhere – worst case, inserting another attack vector just as was done in the SolarWinds attack itself. That’s a good reason to establish firewalls with least-required access (i.e. nothing can get to any destination on any port unless there’s a good reason for that access) instead of the internally wide-open connectivity I’ve seen as the norm. Even in places with firewall rules defined, I’ve seen servers where either everything is allowed, or low ports are blocked but everything above 1024 is open.

List Extensions Within Folder

It didn’t occur to me that Apache serves everything under the document root – and the .git folder may well be under the document root (you can have your project up a level so there’s a single folder at the root of the project, and that folder is DocumentRoot for the web site). Without knowing specific file names, you cannot get much, since directory browsing is disabled. But git has a well-known structure, so browsing to /.git/index – or, really scary for someone who stuffs their password into the repo URL, /.git/config – is right there, and Apache happily serves it unless you’ve provided instructions otherwise.
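
One way to provide those instructions – a sketch for Apache 2.4, to be adapted to wherever your configuration lives:

# Deny any request whose path includes a .git directory
<DirectoryMatch "/\.git">
    Require all denied
</DirectoryMatch>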

A coworker brought up the intriguing idea of, instead of blocking the .git folder so things subordinate to .git are never served, having a specific list of known good extensions the web server was willing to serve. Which … ironically was one of the things I really didn’t like about IIS. Kind of like the extra frustration of driving behind someone who is going the speed limit. Frustrating because I want to go faster, extra frustrating because they aren’t actually wrong.

But configuring a list of good-to-serve extensions means you’ve got to get a handle on what extensions are on your server in the first place. This command provides a list of extensions and a count per extension (so you can easily identify one-offs that may not be needed):

find /path/to/search/ -type f | perl -ne 'print $1 if m/\.([^.\/]+)$/' | sort | uniq -c
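
The perl one-liner prints each file’s extension (the capture happens to include the trailing newline, which keeps the output one extension per line), and sort | uniq -c tallies them. The output looks something like this – the counts and extensions here are hypothetical:

     12 css
      3 js
    214 php
      1 sample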