Network Patching and Code Signing
Brian Toevs
The recent Equifax breach has prompted a great deal of technical chatter about patching our systems. The staff in your IT department are intimately familiar with patch management regimens, and there are legitimate reasons to delay patching (though the Equifax incident was not supported by any legitimate reason for failing to apply the patch in question). In this blog segment, I’m going to explain the issues for the non-technical reader so that you can better understand what is involved in this process.
An abundance of older, unpatched software has been identified as a cause of vulnerabilities exploited across our InfraGard sectors, not simply in the Equifax incident, though that breach is a perfect example of a Financial Sector vulnerability. In enterprise networks, the patch release has been widely publicized as the preferred method for addressing these vulnerabilities; in our networks, vulnerabilities are typically addressed by segmenting the network instead. I’m not going to completely dispel this notion, but I am going to talk about some of the pros and cons of patching and why it isn’t the demon that everyone makes it out to be. There’s a lot more to it than simply telling your IT staff that they need to keep up with their software patching.
What do we mean when we say ‘patching’ and ‘code signing’?
Patching is applying a modified version of an existing application or application component (executable) in order to address some vulnerability or deficiency in that executable. This is rather like tuck-pointing bricks in a wall. If a damaged or faulty brick is identified in a wall, a mason will remove that brick and replace it with a new one. It isn’t necessary to tear down and completely rebuild the entire wall. Just replace the bad brick.
Code signing is a method of digitally tagging executables via a cryptographic hash in order to warrant that the contents of the executable have not been altered since the code signature was applied. To carry the prior analogy a bit further, this would be a process of color and texture matching to ensure that the brick you’re using to replace the faulty one is from the same manufacturer, lot number, and kiln. In other words, verifying that the replacement isn’t a cheap substitute from China.
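To make the hash component concrete, here is a minimal Python sketch of how a digest detects tampering (illustrative only; real code signing also involves the vendor signing that hash with a private key, which this sketch omits):

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Return the SHA-256 digest of an executable's contents."""
    return hashlib.sha256(data).hexdigest()

original = b"original executable bytes"
tampered = b"original executable bytez"  # a single byte changed

# Even a one-byte change produces a completely different digest,
# which is how a signed hash reveals tampering.
print(file_digest(original) == file_digest(original))  # True
print(file_digest(original) == file_digest(tampered))  # False
```

The point is that the digest is a fingerprint of the exact bytes: any alteration, however small, yields a different fingerprint.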
Why are patches released?
Patches are not published by the software vendor simply because a flaw has been identified in the code. The vendor weighs a balance between liability and the release of patches. There’s little argument from the user community that the quality of product released by the software industry is sub-optimal. Software manufacturers are simply not held to the same level of accountability and liability for defects as other industry sectors such as manufacturing (think of recalls on automobiles; when was the last time you heard of a software recall?). Recalls and patching are considered ex-post treatment of product quality: let’s get the software out there and worry about the details later.
The public nature of disclosing new vulnerabilities has made those vulnerabilities even more dangerous. There is a constant conflict between the people who identify vulnerabilities in software and the manufacturers who don’t want them to advertise these faults. The process is supposed to work like this: researchers identify the flaws and inform the manufacturers, who then create corrective software and distribute it to all of their customers. Sometimes the lag between the researchers finding a vulnerability and the manufacturer fixing it is longer than the researchers are willing to wait; thus the conflict. The researchers want the credit for the find, and because they don’t want the bad guys to find the flaws first, they want the manufacturers to act quickly. The manufacturers would prefer to focus on new features and capability development rather than fixing flaws in the ‘old’ system. Sometimes this is handled with complete professionalism and collegiality; sometimes it is not. Consider Google’s Project Zero, where the folks at Google publish everyone else’s vulnerabilities, but not their own.
The ‘old’ Microsoft patch philosophy
“We’ll ship it on Tuesday and get it right by version 3” – Bill Gates (Anderson, 2001). This quote is based upon an infamous email sent by Mr. Gates back in 1997 (full disclosure: I couldn’t locate the original citation). There has been a pervasive attitude within the software industry to release software as soon as possible, with corrections made via the patching process. Anderson notes that even in 2001, Microsoft leadership had already changed their practices to actively pursue a policy of security over features. This is evidenced by the free distribution of Microsoft Security Essentials (aka Windows Defender), which has consistently sat in the first or second position among Windows-based anti-malware software. However, not every vendor out there has embraced security in such a way. Many vendors (not simply financial sector vendors) still rely upon segmenting their networks to address security concerns. Other vendors are actively attempting to close holes in their systems, through updates or patches to existing software as well as hardening their systems as they’re sold to new customers. Sometimes the changes are so significant that entirely new versions of the software are required.
Categories of vulnerabilities
Figure 1. Categories of Vulnerabilities (Henrie, 2013)
This chart is included to give you an understanding of where the threat to your network is coming from, particularly the types of threats that could be considered “patchable” by one of your software vendors. That means not simply your financial sector software vendor, but any of the software running on your network, including the operating system itself (and we’re back to Microsoft Patch Tuesday).
Some of these, though, are not generally addressed by patching, such as:
- Permissions, privileges, and access control – managed by your sysadmin with proper group management.
- Security configuration and maintenance – unless your application software contains its own access module that is integrated into the system and independent of the operating system (no single sign-on).
- Credentials management – likewise handled outside the patch process.
Traditional functions of Information Assurance
The CIA triad: Confidentiality, Integrity, and Availability.
- Confidentiality – you want to make sure that the contents of your system are safe from prying eyes. As described below, some of the most vigorous attackers out there are nation states looking for competitive intelligence for their own industries. Most couldn’t buy your knowledge base even if they wanted to, and even if you would sell it to them.
- Integrity – The system needs to be reliable in that it will do what you expect it to do every time. Compromised systems can’t be relied upon for that.
- Availability – this is perhaps the most important aspect of the CIA triad as it pertains to patching. For many financial sector networks, it can be catastrophic if the system becomes unavailable for any noticeable length of time. So the system must remain available if you patch AND it must remain available if you decide not to patch.
Not all customers are susceptible to the same loss from the same attack. If they were, then the consequences of a breach would be a simple matter to quantify. Instead, each customer must determine their own level of risk. Risk can be assessed in various ways; for this post, I am offering the following simplified formula for assessing risk in a financial sector network.
Risk = Threat * Vulnerability * Consequences
Risk – Impact to the organization
- Threat – internal or external agents intending to disrupt or cause harm to the organization. Misinformation about the real threat to an organization through its network makes this element of the equation difficult to quantify.
- Vulnerability – a weakness in the system that can be exploited. Software is typically considered an ‘experience good’, whose quality is difficult to predict before experiencing it. This makes a vulnerability difficult to discern before the software (or patch) is installed.
- Consequences – the result to the system if a threat successfully exploits a vulnerability. In this case, ‘proximal liability’ is assigned based upon each participating party’s proximity to the fault that caused the harm. Liability should be assigned to the party that can do the best job of managing risk. This is most often the software manufacturer, as the party with the ability to understand vulnerabilities, predict possible damages due to defects, and fix them.
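As a hedged illustration of how the formula might be scored in practice, here is a sketch that treats each factor as a 0-to-1 rating (the scale and example values are my own invention, not an industry standard):

```python
def risk_score(threat: float, vulnerability: float, consequences: float) -> float:
    """Risk = Threat * Vulnerability * Consequences, each scored 0.0-1.0.

    A zero in any factor drives risk to zero: no credible threat, no
    exposed weakness, or no meaningful consequence means no practical risk.
    """
    for factor in (threat, vulnerability, consequences):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("factors must be scored between 0.0 and 1.0")
    return threat * vulnerability * consequences

# Example: a moderately likely threat, a moderately exposed flaw,
# and maximum consequences.
print(risk_score(0.5, 0.5, 1.0))  # 0.25
```

Note the design consequence of multiplying rather than adding: mitigating any single factor (e.g., segmenting the network to reduce vulnerability) reduces the whole product.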
What is the threat to your financial sector systems?
- Recon for cyber warfare – malevolent actors are out there running network scans to determine the patch level of an organization and thereby identify known vulnerabilities on its networks. The results are sometimes even published on websites such as shodanhq.
- Hacktivism – not simply defacement of your website anymore. The Operation Ababil attacks on the financial sector by the al-Qassam Cyber Fighters would be an example.
- Industrial espionage (government / military sector) – there are established examples of cases where state-sponsored organizations have taken advantage of unpatched systems in order to exploit known vulnerabilities.
- Industrial espionage – just like regular state-sponsored espionage, except that the state attackers share their gathered intelligence with their country’s industries for competitive advantage.
- Competitive intelligence – foreign companies, not just those with state ties, are looking for ways to get information from your networks. Not only customer PII and business strategy, but also marketing plans, corporate structure, schedules, etc.
- Insider threat – this is where code signing helps (the threat may even be unintentional on the part of the employee). Consider the employee who plugs in a USB drive given away at a conference, loaded with unsigned software infected with malware. Another may unsuspectingly create mule accounts for money launderers.
iSight Partners, Inc. has evidence that the Russian government is reaching out to the Russian criminal element for help in developing its cyber espionage and warfare capabilities.
Commodity versus targeted malware
Most of the threat out there is a commodity threat. The net cast by the malware creators is wide, seeking to exploit vulnerabilities of convenience for general information harvesting, keylogging, bot-net creation, etc. The focus of this blog is the targeted malware that is financial sector-specific and therefore of particular interest here. This is not to minimize the risk of the commodity threat, but an attempt to focus the discussion so that it is germane to this audience.
Who is responsible within your organization for applying patches and creating the appropriate policies to follow? CISO, CTO, CIO? This matters for assigning responsibility for getting it done. The important thing to remember is that the responsibility doesn’t stop with ‘that IT tech who forgot to apply the CVE patch’.
Some companies are forcing their customers into upgrades (CERT, 2016). These upgrades are not simply patches, because they comprise material changes to the way the software works. The customer is often forced to pay for these upgrades, whereas patches are typically included in either the original cost of the software or a maintenance agreement between the vendor and the customer. An example was Microsoft Windows XP: support (patches) was halted back in 2014 (sort of), and everyone was forced to upgrade.
Some companies are demanding liability clauses in contracts with vendors, holding them responsible for any security breach connected to their software (Kim et al., 2011). In these situations, it may be useful for you to refer to the CERT Procurement Strategy (CERT, 2016) for guidance on how to negotiate a reasonable agreement with your vendors that will provide you with some coverage in the event of a security breach; proximal liability provides that coverage. As for End User License Agreements (EULAs) – ever read one? They don’t commit the vendor to do anything, and the user is basically on their own.
Read the release notes associated with the patch announcement. It is very common for vendors to release fixes to aspects of the software that you are not using, or that would conflict with other software installed on your network. The release notes will tell you about known conflicts (of course, not all conflicts will be known to the vendor, but at least you can avoid the ones they’ve already tested for). It is common for cybersecurity professionals to recommend that you shouldn’t install what you don’t need. You will have to determine as an organization whether you want to apply updates and upgrades that provide no material benefit. There are two questions you should be asking: what would be the downstream effect of passing on updates that build upon one another? If you don’t have patches C and D, will patch E work?
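That last question about cumulative patches can be checked mechanically when the vendor publishes each patch’s prerequisites. A sketch, with hypothetical patch names and dependency data:

```python
def missing_prereqs(patch, prereqs, installed):
    """Return prerequisites of `patch` (direct and transitive) not yet installed."""
    missing, stack = set(), list(prereqs.get(patch, []))
    while stack:
        p = stack.pop()
        if p not in installed and p not in missing:
            missing.add(p)
            stack.extend(prereqs.get(p, []))
    return missing

# Hypothetical dependency data: patch E builds on D, which builds on C.
prereqs = {"E": ["D"], "D": ["C"], "C": []}
installed = {"C"}  # patch C was applied, D was skipped

print(missing_prereqs("E", prereqs, installed))  # {'D'}
```

If the result is non-empty, patch E should not be assumed to work until the listed prerequisites are applied (or the vendor confirms the patch is cumulative).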
You should verify that the patch you’ve received is actually the patch that you need to apply. There have been cases where malicious actors injected their malware into what appeared to be a legitimate update to the system. Code signing gives you the capability to validate that the patch is indeed coming from a trusted source. File hashes, sometimes provided within the release notes, will verify that the files received are not faulty. Testing the release can further reduce the chance of misapplying the patch.
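Verifying a received patch against the hash published in the release notes is easy to script. A sketch (the patch contents and published digest below are placeholders, not real vendor values):

```python
import hashlib
import hmac

def verify_patch(data: bytes, published_sha256: str) -> bool:
    """Compare the patch file's SHA-256 against the vendor's published value."""
    actual = hashlib.sha256(data).hexdigest()
    # compare_digest avoids timing side channels when comparing digests
    return hmac.compare_digest(actual, published_sha256.lower())

patch_bytes = b"pretend patch contents"
published = hashlib.sha256(patch_bytes).hexdigest()  # stand-in for the release notes

print(verify_patch(patch_bytes, published))           # True: safe to stage for testing
print(verify_patch(b"tampered contents", published))  # False: do not install
```

A passing hash check confirms the bytes match what the vendor published; it does not replace sandbox testing for compatibility, which the next sections address.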
Specialized software almost always requires special attention when patching. Fortunately, these releases are rare and tend to be better tested by the vendor. Even so, specialized software warrants special scrutiny and rigorous testing before a patch is applied. It would be a good idea to have a second copy of critical software installed in a network sandbox, where the patch can be applied in a test environment isolated from your production network. This will also help you determine what other systems may be impacted by the update.
Potential conflicts with existing software tend to be the sole responsibility of the consumer. Unless third-party software is required by the vendor (such as an operating system), vendors want to absolve themselves of responsibility for their software’s impact upon other software and other software’s impact upon theirs. The sandbox network will help you test for continued compatibility.
Figure 2. How are updates applied on networks? (Gerace & Cavusoglu, 2009)
Look at how many companies are still leaving their automated updates turned on. How can you have control over your system if you’re letting all of your vendors have free access to your network? What is your firewall doing? For your home computer, automated updates to the operating system and the anti-malware system are good things. For your organizational network, not so much. Remember that it is your responsibility to verify the source of the patch, its compatibility with your other systems, and the validity and reliability of the vulnerability fix or capability the patch is meant to provide.
Cost of failure to patch / code sign
The costs of being vulnerable because you didn’t patch include the following…
- Cost of lost historical customer data
- Corruption or modification of data
- Loss of proprietary information
- Loss of competitive planning information
- Damage to reputation
- System down-time to clean network if infected due to non-patching
The Patch Management Process
Do you need to be involved in the patch management / cadence process? The following is a brief outline of the steps involved in getting a patch applied to your network. This may seem like a draconian and almost overwhelming response to what should be a rather simple task. Of course, the alternative is ‘an Equifax moment’.
- Senior executive support – executives need to recognize the risk to the organization and budget appropriate resources for the effort. This is the “designated criminal”, a role recently filled by Equifax CEO Richard Smith.
- Dedicated resources and clearly defined responsibilities – individual staff members should have the dedicated responsibility for ensuring that appropriate patches are properly and consistently applied in a timely manner.
- Creating and maintaining a current technology inventory – this may seem rather basic for this list, but an accurate inventory of hardware and associated installed software is critical to ensuring that patches are appropriately distributed throughout the organization.
- Identification of vulnerabilities and patches – the dedicated staff listed above should maintain a personal familiarity with new vulnerabilities as they are identified (especially zero-day vulnerabilities) and with the upcoming patches that will correct them. They should also be the go-to resource for mitigating infections that exploit these vulnerabilities.
- Scanning and monitoring the network – this should be an ongoing effort that goes beyond inspecting the network for patch-related vulnerabilities to include log event monitoring, IDS, and the configuration of connected devices.
- Pre-deployment testing of patches – testing patches in a controlled environment takes on greater importance for financial sector networks, where it’s very likely that a non-financial sector software vendor has done no validation of the interoperability of their patch with the custom financial sector software already installed at your facilities.
- Post-deployment scanning and monitoring – is the network operating in a manner consistent with what was expected once the patch was installed? Post-deployment network scanning can also be used as an audit tool to verify compliance with defined standards.
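Several of the steps above, particularly the technology inventory and the identification of vulnerable hosts, lend themselves to simple tooling. A minimal sketch, with invented host names, packages, and version numbers:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    software: dict = field(default_factory=dict)  # package -> installed version tuple

def hosts_needing_patch(inventory, package, patched_version):
    """From the technology inventory, list hosts running an older version."""
    return [h.name for h in inventory
            if package in h.software and h.software[package] < patched_version]

# Invented example inventory.
inventory = [
    Host("db-01", {"struts": (2, 3, 31)}),
    Host("web-01", {"struts": (2, 3, 32)}),
    Host("hr-01", {"payroll": (1, 4, 0)}),
]

# Suppose a fix shipped in struts 2.3.32: which hosts are still exposed?
print(hosts_needing_patch(inventory, "struts", (2, 3, 32)))  # ['db-01']
```

Tuple-encoded versions compare correctly under Python’s ordering, which is what lets the inventory answer “which hosts still run a version older than the fix?”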
There have been sufficient ‘cyber incidents’ in the news recently to reinforce the criticality of a proper patch cadence for your organization. The effort goes far beyond the automated Windows updates that you turn on for your home computer and forget about. This is a critical element of cybersecurity that is too often dismissed as a trivial effort and relegated to the most inexperienced IT staffer. My intention with this post is to help the non-IT professional who has regulatory risk responsibility understand that this is indeed an element of their job. You are at risk in your organization, just as Richard Smith has demonstrated so recently. You can no longer simply relegate patching to an IT functionary with little or no oversight. I hope that you have found this rather lengthy posting valuable.
Anderson, R. (2001). Why information security is hard – an economic perspective. Proceedings of the 17th Annual Computer Security Applications Conference, New Orleans, LA.
CERT (2016). ICS-CERT Procurement Strategy available from:
Gerace, T. & Cavusoglu, H. (2009). The critical elements of the patch management process. Communications of the ACM, 52(8), 117 – 121. doi: 10.1145/1536616.1536646
Henrie, M. (2013). Cyber Security Risk Management in the SCADA Critical Infrastructure Environment. Engineering Management Journal, 25(2), p. 41.
Kim, B. C., Chen, P.Y., and Mukhopadhyay, T. (2011). The effect of liability and patch release on software security: The monopoly case. Production and Operations Management, 20(4), p.602-617. doi: 10.1111/j.1937-5956.2010.01189.x