Like any organized activity, InfoSec has its share of myths. They are “idea zombies”: they just won’t die. Here are a few of the more pernicious ones I’ve encountered:
Likelihood
Likelihood is an excellent consideration when discussing naturally occurring events. That’s why likelihood crops up so much in the Quality Assurance world: QA can bring statistics to bear on the likelihood of some specific event occurring, such as oxidation rusting through a bridge girder. This can be mathematically quantified as the probability of the bridge failing during a given period of time.
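To make that concrete, here is a minimal sketch of the kind of math QA can do for natural failures. The constant failure rate and the numbers are invented for illustration; the point is only that a random, non-malicious process admits a well-defined probability:

```python
import math

# Hypothetical constant-rate corrosion model (rate and window are made up
# for illustration): failures arrive at rate lam per year, so the
# probability of a failure within t years is 1 - exp(-lam * t).
def p_fail(lam: float, t: float) -> float:
    return 1.0 - math.exp(-lam * t)

# A girder with a 1%-per-year failure rate, over a 10-year window:
print(round(p_fail(0.01, 10), 3))  # -> 0.095, about a 9.5% chance
```

No such formula exists for a sentient adversary, which is exactly the problem described next.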
But what if the rust became sentient and decided to focus its combined oxidation onto one specific girder or bolt in the bridge? What happens to those nice mathematical models of failure? Does likelihood still work in this case? The answer is emphatically “No”. Where there is malicious intent you cannot assign a meaningful “likelihood” value. It literally has no value in this domain, yet it is the go-to topic at meeting after meeting.
I have heard people claim that “likelihood” is the odds of an attacker finding a given vulnerability. How exactly does this guessing game help us score a vulnerability and mitigate it?
The other problem with likelihood is that it is a convenient place to “game” the scoring of vulnerabilities. If your vulnerability scoring system includes likelihood and you’re tempted or pressured to overlook certain vulnerabilities (because of laziness, busyness, lack of insight / resources / management support, etc.), arbitrarily setting the likelihood to “rare” will bias the final score downward so you can ignore the issue.
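A toy sketch shows how easily this works. The matrix, labels, and threshold below are hypothetical, but they mirror the impact-times-likelihood schemes used in many risk registers:

```python
# Hypothetical 5x5-style risk matrix: score = impact * likelihood weight.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "certain": 5}
IGNORE_THRESHOLD = 8  # made-up cutoff: scores at or below this get waved off

def risk_score(impact: int, likelihood: str) -> int:
    return impact * LIKELIHOOD[likelihood]

# The same critical flaw (impact 5), rated honestly vs. rated to bury it:
print(risk_score(5, "likely"))  # -> 20, well above the threshold
print(risk_score(5, "rare"))    # -> 5, conveniently ignorable
```

One dropdown selection is all it takes to move a critical finding into the “ignore” bucket, with no change to the underlying vulnerability.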
I once had a client’s otherwise-intelligent QA representative tell me that likelihood was “rare” because the product had never been attacked in its 15 years of existence. As a result, the overall score of the attack that was underway at that very moment was low and could be ignored! By that logic, we would ignore all first-time, exploit-based attacks!
Wheel-spinning and speculating about “likelihood” only derails the security process. Explain its irrelevance (if you have to) and move on.
Attacker Skill Level
If you were designing an embedded system to be secure against one particular person, say my brother Steve, then his skill as a hacker would be worth trying to determine. Steve is nice enough in his own way, but doesn’t really understand computers. It would be easy to implement security mitigations to prevent Steve from successfully attacking your embedded system.
But if we want this product to go out to the entire world (or even just one country), the hacking skill level of Steve doesn’t mean anything. The relevant skill level would in fact be the combined maximum skill level of everyone in the world. Because we live in a world interconnected via the internet, what is today a “nation-state” level attack will be downloadable by every hacker in a matter of days. Hacks are constantly being democratized: posted in easily accessible places (such as GitHub) and folded into pen-testing frameworks (think Metasploit) so that anyone can easily replicate them.
As a result, the “attacker skill level” just doesn’t matter to the security of your embedded system. If it can be hacked, it will be hacked.
Attacker Motivation
Money? Challenge? Fame? Activism? Revenge? Murder? Blackmail? Fun? A competitor? In the end, do you care? Not really! The only (very slight) consideration in this area is that any vulnerability that can be easily monetized will be the first thing in the system to be attacked.
My favorite victim’s lament is “Why are they attacking us? We are good people!!” The attacker doesn’t care about you and never will, so you should not care about them.
Securing Legacy Products
I can’t turn a Kia into a Formula 1 race car. Likewise, you cannot magically bolt a security solution onto existing legacy products to make them secure. Yet there are organizations that will sell you all kinds of tools claiming to accomplish exactly that.
The only answer to legacy products is to dispose of them properly and buy new, secure products. This will encourage manufacturers to create secure products in the first place. Consumer Reports is going to start including security in its product ratings, which may help raise consumer awareness of a device’s level of security (or lack thereof!) as well.
Don’t allow yourself to get into a conversation about securing legacy devices; it will never end!
Environmental Testing
Some groups think that any security assessment should be performed by the end users: people skilled in their day-to-day jobs, but typically not skilled in embedded device security.
The environment does matter for security, but the impact of specific vulnerabilities in specific use cases can only be assessed by people with complete information about the design and how it was implemented. That usually means manufacturers, third-party contract developers, and (less frequently) expert consultants who join a product development effort already underway. Any third party must come up to speed and acquire all the knowledge a start-to-finish developer on the project would have. Only then can a third party, even an expert security consultant, assess the impact of vulnerabilities on specific use cases.
There is absolutely no way anyone can make these assessments without project-specific “developer-level knowledge” plus broad knowledge of embedded device security best practices and threats. Assuming that a normal end user has any awareness of the current threat landscape, let alone how it could affect a given product in a given use case, is simply foolish.
But "experts" still claim that environmental testing is the holy grail of security assessment. It's not. It wastes tons of time for a result that is ill-informed and poorly structured for designing mitigation.
The Next Release
This one really isn’t unique to the domain of InfoSec. It is more a function of Corporate America and management’s perception that the project schedule is more important than any one feature, so any feature that threatens to delay the schedule gets pushed back to the “next release.”
In my decades of developing new products, I have never seen “that feature” get added after release, not in the “next release” or any subsequent one.
When it is just a feature, that may not be too bad (the product just might not sell as well as it could have); but when the “feature” is a security mitigation that doesn’t get implemented, the result can be a disaster.
This is also why I dislike the “best practice” currently trending in Penetration Testing (see below).
It’s in binary!
I have lost track of how many firmware developers have told me, as if it were obvious (“Duh!”), that once their C code is compiled to binary it can’t be reverse engineered. This happened as recently as a week ago.
There are many tools available, commercially and as freeware, that can disassemble your binary back into assembly code, and even decompile it, turning your binary back into decent C code in seconds. There are some tools that help obfuscate the code, but they are just security “speed bumps”: at best they slow the attacker down, but they certainly don’t stop them. (I’m not saying don’t use them; I am saying don’t pay a lot of money for them.)
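You can see the principle in miniature without any reverse-engineering tools at all. This is a Python analogy rather than compiled C, but the lesson is identical: compiling does not hide anything. A “secret” comparison, once compiled, still carries its constants in the clear and disassembles instantly with the standard library:

```python
import dis

# Compile a "secret" credential check down to bytecode (the Python
# analogue of building firmware from source).
code = compile("entered == 'hunter2'", "<firmware>", "eval")

# The secret sits right there in the compiled object's constant pool:
print(code.co_consts)  # -> ('hunter2',)

# And the full instruction listing is one call away:
dis.dis(code)
```

Tools like Ghidra do the equivalent for native binaries; the compiled form is a different encoding of your logic, not a vault around it.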
Penetration Testing
There you are, 18 months into your product realization lifecycle, about to go into Verification + Validation or maybe even release your shiny new product to Production. You’re on schedule (or close enough that your boss isn’t upset), and that list of six possible exploits that your lead software developer (he’s an expert in InfoSec, right?) put together was included in the requirements, so you “know” your product is secure. But just to be certain, you contract a third-party penetration tester to attack your new embedded system.
A couple of weeks later you get the bad news, in the form of a lovely report from your penetration tester detailing all the trivial ways your new product can be hacked, and the disastrous consequences.
Do you blow the schedule and budget out of the water, stop the transition to Production, redesign, reimplement and fix all of these vulnerabilities discovered during Pen testing?
Do you ship the product, ignoring all of the now “known vulnerabilities” in your not-so-shiny new product and hope that it is never hacked?
I could tell you what 99% of organizations do, but I bet you already know the answer!
Penetration testing is good, but it’s not generally being utilized well. There are better ways to accomplish the same thing… but that is a topic for a different blog post.