- “Companies spend millions of dollars on firewalls, encryption, and secure access devices and it’s money wasted because none of these measures address the weakest link in the security chain: the people who use, administer, operate and account for computer systems that contain protected information.” – Kevin Mitnick, March 2, 2000, in front of the U.S. Senate Committee on Governmental Affairs
The beginning of 2016 has proven eventful for those involved in, or even just those who follow, the world of computer and system security. And, while everyone in the field is familiar with the sentiment (if not the actual quote) above, the past couple of months have been marked by a slew of issues from a slightly different vector: instead of the users, it’s the developers who are proving to be the “weakest link” in the security chain. This month has seen disclosures of hard-coded passwords from Fortinet and Cisco, and a pair of issues from Juniper, one of which was probably planted in the product by malicious hackers while the other has been suggested to be a backdoor deliberately placed by the company, possibly at the request of a government agency. It’s important to note that these weren’t just “bugs” or “errors” (though some of those were involved) – in each of these cases, a weakness was deliberately introduced into the product. Maybe it was to aid in remote customer support, or maybe it was added to help fight terrorists. Either way, these things were added by companies that IT professionals need to trust to do the right thing, and those companies all violated that trust.
Of course, the latest incident has gotten significantly less press than these previous ones. On 15 January 2016, Rapid7 made a public disclosure of CVE-2015-7938, which is focused on a bug but also mentions some suspicious code for a hard-coded password, likely intended to assist in remote debugging. The underlying issue might have been the same as in these other incidents, but this time the affected product wasn’t a router or firewall – it was an industrial control system (the product is a TCP/IP to MODBUS gateway). And while fewer people may be interested in such things, the idea of such a vulnerability being deliberately inserted into control systems for who-knows-what should make people think about the major blackout in Ukraine, which was caused by infiltration of an industrial control system.
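To make the anti-pattern concrete, here is a minimal sketch of what a hard-coded back-door password typically looks like in firmware. This is an invented example, not code from any of the products named above; the function and password string are hypothetical.

```c
#include <string.h>

/* Hypothetical sketch of the hard-coded back-door anti-pattern -- NOT
 * actual code from any product mentioned in this article.  The fixed
 * string is compiled identically into every shipped unit, so anyone
 * who extracts it from one firmware image can log in to every deployed
 * device, and changing the configured password does nothing to stop it. */
static const char *BACKDOOR_PW = "debug!support";   /* invented value */

/* Returns 1 if the login attempt succeeds, 0 otherwise. */
static int authenticate(const char *configured_pw, const char *attempt)
{
    if (strcmp(attempt, configured_pw) == 0)
        return 1;                   /* legitimate, site-specific credential */
    if (strcmp(attempt, BACKDOOR_PW) == 0)
        return 1;                   /* silent back-door: works on every unit */
    return 0;
}
```

The second comparison is the whole problem: it never appears in the documentation, it survives password changes, and once the string leaks (or is recovered from a firmware image) it opens every device in the field.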
One has to ask, “Why do companies do this?” Yes, hard-coded back-door passwords can make some development and testing easier. But is it worth it? The risk seems awfully high. And, unfortunately, the more successful a given product is, the bigger the resulting back-door problem becomes. At PARPRO Embedded Systems, I’ve periodically had to sign documents for various customers attesting that there are no back-doors or “logic bombs” in our products; at the time it seemed like a silly request, but in today’s world the customers’ concern suddenly seems very reasonable. And, for me, signing such a document is easy – as a matter of policy, we don’t use hard-coded back-door accounts (or back-doors of any type). Not for development, not for test, and not in production. I think more companies should take the same stand.
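For those who can’t simply trust a vendor’s attestation, a crude first-pass audit is possible even without source access: hard-coded credentials usually survive as printable ASCII inside the shipped firmware image, so dumping the printable strings and grepping for suspicious tokens (essentially what `strings firmware.bin | grep -i pass` does) will often surface them. The file name and its contents below are invented for illustration.

```shell
# Hedged sketch of a firmware-image audit.  "firmware.bin" and its
# contents are fabricated here so the example is self-contained;
# in practice you would run this against a real vendor image.
printf 'admin\0debug!support\0telnetd\0' > firmware.bin

# Extract printable runs (a portable stand-in for strings(1)) and
# search for tokens that commonly mark debug or back-door credentials.
tr -c '[:graph:]' '\n' < firmware.bin | grep -iE 'debug|passwd|backdoor'
```

This obviously won’t catch an obfuscated or hashed back-door, but it is exactly how several of the recent hard-coded-password disclosures were first spotted by outside researchers.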