One of the first things I laid my eyes on when I decided to become a security professional years ago was Microsoft TechNet’s 10 Immutable Laws of Security. I really liked that the team put 10 principles on paper which were simple, but very valuable. I have lived by those rules ever since, but as technology evolved, I started to question whether those 10 are actually still on point. Immutability is a tricky thing, as even the laws of physics start to misbehave, for example around black holes. If you think about passwordless authentication, law #5 kind of loses its meaning:
#5 Weak passwords trump strong security
Many years ago I sat down with a friend (a doctor of philosophy, really clever chap) and we tried to put together a new set of, say, 10 laws that were perhaps more generic and covered greater areas of security. For instance, ‘The system is as strong as its weakest link’ covers cryptography, access control, infrastructure, resiliency and so on. Modern security teams utilise a process called ‘threat modelling’, where they… find the weakest links of their applications and systems, so the generic law clearly still had a use case. But on the other hand, some things need to be called out specifically, like encryption or the usage of weak passwords. The more generic we got, the less relevant and clear it seemed for IT and, specifically, for security. And after all, Microsoft had done a good job and called out the relevant pieces.

A while ago, I published a blog about a breach, where I ‘revisited’ law number 5, specifically because it was close to my heart – being part of the identity and access management community. That sparked the idea to revisit all the laws, keep the general gist of them, but make them more relevant for 2022. This is a request for comment. I encourage everybody to get in touch to discuss and I will attribute any collaborators. If you think we need another law relevant enough to call out, or want to merge some of them, or break some down – there’s an opportunity there. I am leaving law number 2 blank for inspiration.
10 Mutable Laws of Security AD 2022
- Law #1: If a bad actor can run their code on your device it’s not your device anymore
- Law #2:
- Law #3: If you don’t encrypt your data in transit and at rest, it’s likely to leak
- Law #4: Anomaly detection is as important as signature scanning
- Law #5: Weak identity trumps strong security
- Law #6: Getting hacked may be beyond control, but not knowing about it is lack of control
- Law #7: Encrypted data is only as secure as the decryption key and the encryption implementation
- Law #8: Out of date software is a vulnerability
- Law #9: Absolute anonymity all the time isn’t safe or practical for anyone
- Law #10: Trust nothing without verification
Law #1: If a bad actor can run their code on your device it’s not your device anymore
Consolidating a few here (1, 2 and 4 of the originally posted laws). The essence of this is that regardless of what device we are talking about – a laptop, server, website, mobile phone, client side, server side… if someone manages to run their code without your consent in your environment (whether you own the infrastructure behind it or not), you have a problem. The attack vectors differ: it may be a phishing attack, exploiting a vulnerability, using a non-genuine operating system from a warez website or simply swapping your mobile charging cable with a malicious one. This may lead to various effects, like data disclosure, denial of service or ransom.
Law #3: If you don’t encrypt your data in transit and at rest, it’s likely to leak
The original law says ‘If a bad guy has unrestricted physical access to your computer, it’s not your computer anymore’. The narrative was around Denial of Service, stealing hard disks, planting keyloggers etc. In my eyes it has lost its value to a degree. If you think about MDM (Mobile Device Management) software, which encrypts a mobile phone’s data and hardens the device to prepare for the situation where the device is lost, the original law loses its meaning. In a zero trust model, physical access can be acceptable while still maintaining a good security posture. Are we saying that all computers in libraries and other public places are not yours? If you run your app in AWS, does it mean it’s not yours? Of course not; there are controls that protect organisations in such scenarios. What is the essence of that law then? Two major outcomes really – Denial of Service and data theft. I love how Microsoft explained one of the DoS scenarios:
He could mount the ultimate low-tech denial of service attack, and smash your computer with a sledgehammer.
Since the physical denial of service attack is a relatively rare beast, let’s focus on the data leak. Protecting digital resources from unsupervised, third-party access comes down to one thing: encryption. When the Internet Protocol version 4 (IPv4) was originally designed, no one thought about security, and it soon became obvious that this was just not enough. IPv6 has security built in, but in the meantime we got used to other developments in security – IPsec and, most famously, SSL/TLS, which dare I say all of us use on a daily basis. Unfortunately, encrypting data in transit wasn’t enough either. With the evolution of the public cloud, we needed to encrypt the data at rest, too. What happens if someone steals my hard drive or mobile phone? Not much really, at least nothing that warrants a sleepless night. Pardon my easy-going approach to life, but if it’s beyond your risk appetite, get insurance.
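To make “data at rest” less abstract, here is a minimal sketch of application-level encryption using the Python cryptography library’s Fernet recipe. It’s an illustration only, not a recommendation of any particular design; in practice the key would live in a key management service, not next to the data.

```python
# Minimal sketch: encrypting data at rest with an authenticated symmetric cipher.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this secret and stored separately from the data
f = Fernet(key)

ciphertext = f.encrypt(b"customer record: alice@example.com")
# Whoever steals the disk sees only the ciphertext...
print(ciphertext)
# ...and only the key holder can get the plaintext back.
print(f.decrypt(ciphertext))
```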
Law #4: Anomaly detection is as important as signature scanning
This seems like a new law but, let me tell you, it actually isn’t. This is what I call ‘essence reshuffling’, but let me explain. Instead of focusing on out of date virus definitions (‘An out of date virus scanner is only marginally better than no virus scanner at all’), which is clearly too narrow these days, let’s keep everything up to date (vulnerability management). Antivirus software in isolation is not such a big part of the picture as it was 20 years ago, so I moved this to another law (8) and managed to somewhat balance a generic statement with relevance to IT security.

However, when Microsoft wrote this originally, I think they meant ‘detect bad code’. In the old days (I am deliberately not using the term ‘virus’ any more, since these days we have more than just viruses – malware, trojans, ransomware etc.), we simply compared the signature of the code to detect that it was malicious. We knew what the code looked like, so it was easy. Once viruses evolved and started to mutate (amazing really, isn’t it?), they gained the ability to change what they looked like from the code perspective, effectively rendering a different picture of themselves, which led to the need to detect each particular mutation with a different signature. Add encryption to this picture and it’s really hard to take a closer look at what those applications do. They started to behave like ordinary software, pretending to be (or reach) web applications while allowing remote control of the resources, hiding even from firewalls. Heuristic analysis changed the way we look at malicious code, as we started to look for suspicious properties rather than an exact signature. These days we have the ability to detect mutations or viruses we didn’t know about. Not all of them, of course.
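To make the distinction concrete, here is a toy sketch of what classic signature matching boils down to, and why a single changed byte is enough to evade it. The “signature database” and payloads are invented for illustration.

```python
import hashlib

# Invented "signature database" - real engines hold millions of entries.
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def is_known_malware(contents: bytes) -> bool:
    """Classic signature scanning: exact hash match against known-bad samples."""
    return hashlib.sha256(contents).hexdigest() in KNOWN_BAD_SHA256

print(is_known_malware(b"malicious payload v1"))   # True  - exact signature match
print(is_known_malware(b"malicious payload v1 "))  # False - one byte changed, signature misses
```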
Secondly, the original laws said “A computer is only as secure as the administrator is trustworthy”, which, again, in the era of zero trust has kind of lost its relevance. I know, I know… Edward Snowden. But let me tell you this – if I compromise your administrator’s laptop, it doesn’t matter if she/he/they is/are a pure-of-heart, qilin-like creature… it’s not your computer anymore. Besides, we have to assume anyone can go rogue, with or without bad intentions. Not everyone will have a price to turn them against the organisation, but most will submit to a credible threat against their reputation, or even against their life or the lives of their family. We just have to factor that in. It doesn’t mean we don’t want to screen our employees, it’s just not enough. That’s where anomaly and risk-based detection comes in. If subjects or objects behave out of the ordinary, we should act on it. For example, if I log into a corporate system Mon-Fri around 9am, and suddenly someone logs into the same system on Tuesday at five past midnight (using correct credentials), that should raise a red flag. If a component on my laptop suddenly tries to dump all accounts associated with the operating system, and it was run interactively, it should ring an alarm bell in your SIEM (Security Information and Event Management) solution.
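Here is a rough sketch of what a time-of-day rule like the one above could look like. Real products use far richer, risk-based models; the hours and threshold below are made up purely to illustrate “raise a flag when behaviour deviates from the learned baseline”.

```python
from datetime import datetime

# Hours (0-23) at which this user has historically logged in (assumed baseline).
typical_login_hours = {8, 9, 10}

def is_anomalous_login(event_time: datetime, tolerance_hours: int = 1) -> bool:
    """Flag logins that fall outside the user's usual window (+/- tolerance)."""
    return all(
        abs(event_time.hour - usual) > tolerance_hours
        for usual in typical_login_hours
    )

print(is_anomalous_login(datetime(2022, 3, 15, 9, 5)))   # False - normal morning login
print(is_anomalous_login(datetime(2022, 3, 15, 0, 5)))   # True  - five past midnight
```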
Law #5: Weak identity trumps strong security
OK, this is one of the variations of ‘The system is as strong as its weakest link’, but worth calling out specifically. It’s an edited one, since the original said ‘weak passwords’. Let me share one of my thought beasts with you. I come across many professionals daily and I can’t help but notice that identity is treated somewhat like the little brother of security. When we say security, we mean penetration testing, risk, vulnerability management etc. But the fact is, identity is one of the major pillars of security and a very important domain in its own right. It doesn’t matter how strong your house is, with 20-inch walls and steel-reinforced doors, if… you use a padlock. And with passwords going away in the near future, the original law has lost its meaning. Identity and Access Management has developed greatly since then, and I feel that changing one word in the original law breathes another 20 years into its life (maybe even more!).
Law #6: Getting hacked may be beyond control, but not knowing about it is lack of control
This one is an original. Over the years I have realised that, no matter what we do, no matter how diligent we are or how much money we spend on security, there’s ALWAYS the possibility of getting hacked. Let’s look at zero-day exploits. We may not know that one of our applications is vulnerable, and those who do know are not the nice types who decide to share the knowledge. They walk the path of a cyber-criminal or… how do I put it into words nicely… state-sponsored IT professional? (The ethical line is very thin here.) It’s not a secret that intelligence agencies around the world are in possession of tools that allow them to breach the security of some systems. Not only isn’t it a secret, some of the tools are (not quite commercially) available and we know what they are, for example NSO Group’s Pegasus. One way or another, the nature of the game is that hackers are always one step ahead of security professionals, and they will be for the foreseeable future. While we work towards risk-based and anomaly detection, security is still mostly reactive. Of course it’s not OK to be negligent; we should always try to remove or mitigate risks and reduce the attack surface, but we cannot make it a 100% sure thing. Long story short, you cannot control whether you will be breached or not, you can only do your best. The really important bit is detection and response. It’s NOT OK to let a breach convert into an APT (Advanced Persistent Threat), which effectively means: ‘You’ve been hacked and you haven’t known about it for a long time’. I always use Walmart’s example, where they were compromised for 8 months. It’s an old story (you can read about it here) but for me it’s important, because it highlights how old the problem is.
Law #7: Encrypted data is only as secure as the decryption key and the encryption implementation
One more edit only, as the original solely mentioned the decryption key. Interestingly, this is yet another worthy call-out from ‘The system is as strong as its weakest link’. Protection of the key is not the only weak link of the encryption malarkey; how the encryption itself is implemented matters just as much. To prove the significance, let me use the infamous Wi-Fi encryption protocol WEP (Wired Equivalent Privacy). It’s actually fairly easy to crack (what a revelation…), and not through classic brute force where you try all the combinations of the passphrase. WEP’s cryptography isn’t bad in principle, it’s actually OK. The way WEP was implemented was the problem, as it led to predictability through the reuse of its IV (Initialisation Vector). You can read a bit more here, if you’re really keen. You can take a really good concept and implement it badly – it’s more common than we think. Hence the edit.
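To show why keystream (IV) reuse is such a big deal, here is a toy demonstration. This is not WEP or RC4 – just a made-up keystream XORed with two messages – but the core effect is the same: reuse the keystream and an eavesdropper can cancel it out entirely.

```python
import os

def xor(data: bytes, keystream: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream))

keystream = os.urandom(32)                # the same keystream reused for both messages
p1 = b"transfer 100 GBP to alice"
p2 = b"meeting at midnight, gate"
c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# An eavesdropper who only sees the two ciphertexts can compute:
leak = xor(c1, c2)                        # equals p1 XOR p2 - the keystream cancels out
assert leak == xor(p1, p2)

# If one plaintext is guessed (crib dragging), the other falls out immediately:
print(xor(leak, p1))                      # prints b'meeting at midnight, gate'
```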
Law #8: Out of date software is a vulnerability
OK, so this is taking the old law and making it a bit more generic, with a slight twist. It said ‘An out of date virus scanner is only marginally better than no virus scanner at all’. Since we focused on antivirus, and in particular malicious code detection, in revised law number 4, we can transform this one into something far more relevant for security these days. The more devices, the more code; the more code, the more vulnerabilities. That’s a fact. And comparing the 1990s, or even the 2000s, to AD 2022, the difference is staggering. Vulnerability management is the continuous, cyclic process of managing software and making sure it’s up to date. In the past there was more time between a particular hole being discovered and attacks exploiting it; these days it happens much quicker. Critical bugs and vulnerabilities need to be patched as a priority, and some organisations will scramble and work through nights and weekends to get it done promptly. One of the most famous and recent examples was Log4j, you can read about it here. While the law is a simplification to a degree, the bottom line remains the same: if you run out of date software, you are likely to be hacked – you are simply leaving your door open to attackers. There are many out there, looking for an opportunity through manual and automated tools.
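As a flavour of what the automated side of vulnerability management does, here is a minimal sketch that flags installed components which fall below a known-fixed version. The inventory and advisory data below are invented for illustration; real pipelines pull this information from vendor advisories and CVE feeds.

```python
# Toy sketch: flag installed packages that are older than the first fixed version.

def parse(version: str) -> tuple[int, ...]:
    return tuple(int(part) for part in version.split("."))

installed = {"log4j-core": "2.14.1", "openssl": "3.0.7"}   # example inventory (made up)
advisories = {"log4j-core": "2.17.1"}                      # first fixed version (assumed)

for package, version in installed.items():
    fixed = advisories.get(package)
    if fixed and parse(version) < parse(fixed):
        print(f"{package} {version} is vulnerable - upgrade to {fixed} or later")
```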
Law #9: Absolute anonymity all the time isn’t safe or practical for anyone
I actually came across an issue that would make a great case study, with one of the large publishing companies. Unfortunately I cannot dive into too much detail, but what I can say from experience is that absolute anonymity is the number one enemy of customer experience. Not only do we not know anything about our consumers, we also cannot protect the revenue properly. Customers will move over to competitors if they feel they are looked after better there. Transparency is key and is what helps tame the most demanding use cases for anonymity. That somewhat covers the practicality bit; let’s have a look at safety. The majority of the population doesn’t mind being monitored by the authorities for their own good. For example, we have our suitcases X-rayed at airports and no one says anything. In fact we ourselves are being X-rayed these days (though it’s not mandatory and there is an alternative). We all want to live in a safe world, where not just anyone can look up ‘how to make a bomb’ articles on the Internet and put the knowledge into practice. The more absolute the anonymity, the less safety around it. It’s all about balance, but ‘absolutes’ anywhere aren’t usually a good thing.
Law #10: Trust nothing without verification
While the concept of zero trust was first attributed to Stephen Paul Marsh in 1994 and used fairly widely by the military and special services, it is only over the last few years that we have all heard about it. The definitions vary and, trust me, zero trust has many, many components; in the context of identity it means that everyone undergoes the same authentication and authorization journey regardless of where they are coming from. We trust nobody unless we verify they are who they say they are. It really became popular because of the COVID-19 pandemic. We had to start working from home, away from offices and perimeter-based security. I am not scanning my card to get into my study in the morning, like in the office building, so how can I be trusted? The concept has applications way beyond the working-from-home use case; for example, one may argue that each email we send should be checked to confirm we really authored it and that it doesn’t contain something sinister sent by a compromised machine instead. In modern architectures there should be no flows (whether human or machine initiated) that are not authenticated and validated. It also goes for all of us mere mortals at home, away from work. You need to do your own due diligence when you open a suspicious email that says you are about to be sent to prison if you don’t click the link attached below.
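To close, here is a minimal sketch of “trust nothing without verification” for a single service: every incoming request must carry a token that is verified on every call, regardless of where the request came from. It uses the PyJWT library; the issuer, audience, secret and claim names are placeholders, and a real deployment would use asymmetric keys from an identity provider rather than a shared secret in code.

```python
import jwt  # pip install PyJWT

SECRET = "replace-with-a-managed-key"     # placeholder - never hard-code real keys

def handle_request(token: str) -> str:
    """Verify the caller's token on every request before doing any work."""
    try:
        claims = jwt.decode(
            token,
            SECRET,
            algorithms=["HS256"],          # never accept unexpected algorithms
            audience="orders-api",         # assumed audience for this service
            issuer="https://idp.example.com",
        )
    except jwt.InvalidTokenError:
        return "403 Forbidden"             # no valid proof of identity, no access
    # Authentication passed - authorisation still needs to be checked per claim.
    return f"200 OK for subject {claims.get('sub')}"

# Usage example: mint a token the way an identity provider might, then verify it.
token = jwt.encode(
    {"sub": "alice", "aud": "orders-api", "iss": "https://idp.example.com"},
    SECRET,
    algorithm="HS256",
)
print(handle_request(token))           # 200 OK for subject alice
print(handle_request("tampered"))      # 403 Forbidden
```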