IriusRisk Team | The Threat Modeling Experts
March 27, 2024

The good, the bad, and the ugly of AI Threat Modeling 

The origins of threat modeling 

No one said security was easy, but it has had an interesting evolution, and threat modeling is no exception. You could argue it began as new applications and systems were being built and tested without a process or framework to support them. This is where the original OWASP Testing Guide came from, followed by a threat modeling process to better understand an application and the context surrounding it.

The problem was, threat modeling wasn’t straightforward. Arguably, it still isn’t if you don’t know much about it. Yes, there are threat modeling methodologies to adopt, if you choose, from STRIDE to OCTAVE and PASTA. However, they all assume the security practitioner understands an ever-growing, complex landscape of threats, how attackers exploit them, how the countermeasures work, and how to prioritize and even implement them.

Fortunately, more and more industries are learning the benefits of threat modeling as a process, and automated tools (like IriusRisk!) help lighten the load for everyone involved.

Humans vs machines for secure code 

Will machines replace developers in writing code? We certainly hope not, and we don’t see it happening long term. You still need human intervention, context and input. But perhaps AI can help us develop code faster. We think AI should be viewed as a co-pilot, an enthusiastic assistant. Teams can use tools like GitHub Copilot for daily tasks to save time, especially for activities like writing unit tests. Even so, any code it produces still needs a human to verify it, as the sketch below illustrates. That human understanding is something we aren’t comfortable handing over to machines, at least not just yet.
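To illustrate, here’s the kind of unit test an assistant might suggest (a hypothetical example of our own, not actual Copilot output). The generated test passes, but it takes a human reviewer to notice what’s missing:

```python
# Hypothetical example of an AI-suggested unit test.
# Illustrative only -- not actual GitHub Copilot output.

def sanitize_username(name: str) -> str:
    """Strip whitespace and lowercase a username before storing it."""
    return name.strip().lower()

# AI-suggested test: covers the happy path only.
def test_sanitize_username():
    assert sanitize_username("  Alice ") == "alice"

# Human-added test: a reviewer spots that whitespace-only input was
# never covered and would silently produce an empty username.
def test_sanitize_username_blank():
    assert sanitize_username("   ") == ""
```

The assistant saves typing time, but the judgment about which cases actually matter still comes from a person.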

AI the Bug-Bounty-Hunter?

Could we use artificial intelligence to find the bugs in our code that we missed? Even better, could we train AI to train other AI on what vulnerabilities to look for? Machines working alongside threat modeling are opening many new doors for security decision-making. But will they identify weaknesses that other processes miss?

One tool that impressed us is STRIDE GPT, an excellent starting point for using GPTs in threat modeling, and one that has empowered non-security users to get involved. However, even with detailed prompts, the outputs are relatively generic and need verification from someone who speaks the language of security. The user has to understand the tool and the context to get the most out of it. Without checks, context, validation and so on, you risk getting caught in an echo chamber of information.
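To make that concrete, here is a minimal sketch of the kind of STRIDE-style prompt such tools construct, assuming the OpenAI Python SDK (this is our own illustration, not STRIDE GPT’s actual code). A thin, generic system description like this one is exactly what invites a thin, generic threat list back:

```python
# Minimal sketch of a STRIDE-style threat modeling prompt.
# Illustrative only -- not STRIDE GPT's actual implementation.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

app_description = (
    "A web application with a login page, a REST API, "
    "and a PostgreSQL database storing customer records."
)

response = client.chat.completions.create(
    model="gpt-4o",  # model choice is an assumption
    messages=[
        {
            "role": "system",
            "content": (
                "You are a threat modeling assistant. List threats using "
                "the STRIDE categories: Spoofing, Tampering, Repudiation, "
                "Information Disclosure, Denial of Service, and "
                "Elevation of Privilege."
            ),
        },
        {"role": "user", "content": app_description},
    ],
)

# The output still needs a human who speaks security to validate it.
print(response.choices[0].message.content)
```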

It also matters who the user of the AI is. An AI-powered threat modeling tool needs to behave differently for a developer than for a security professional, because one of them can make a judgment call on whether an output is nonsense, while the other may struggle with that determination.
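One way a tool could account for this (a sketch of our own, not a description of any particular product) is to condition the assistant’s behavior on who is asking, giving non-security users more explanation and more explicit caveats:

```python
# Hypothetical sketch: tailoring an AI assistant's behavior to the
# user's role. Role names and prompt text are illustrative inventions.

SYSTEM_PROMPTS = {
    "security_professional": (
        "Respond tersely with STRIDE threats and CWE references. "
        "Assume the reader can judge relevance themselves."
    ),
    "developer": (
        "Explain each threat in plain language, rate your confidence, "
        "and flag any threat that should be reviewed by a security team "
        "before it is accepted or dismissed."
    ),
}

def system_prompt_for(role: str) -> str:
    # Default to the most cautious prompt for unknown roles.
    return SYSTEM_PROMPTS.get(role, SYSTEM_PROMPTS["developer"])

print(system_prompt_for("developer"))
```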

AI and the big bad bias 

We all know AI hallucinates. The problem is that it is so convincing that at times it is hard to tell, or even notice, that it has done so; the results carry such humanlike conviction. Without realizing it, you could be using AI outputs that are laced with bias, full of hallucinations and, let’s be frank, just completely incorrect.

We must stay accountable for what we use, checking our own prompts for bias as well as the output from the system. This can be a huge problem for threat modeling: if you use output from an LLM that carries bias within it, you could assume a threat isn’t possible and miss a major attack vector altogether. Equally, you may assume a certain mitigation is possible when it doesn’t even exist. Tread carefully and remember to check for biases and assumptions.

What about ‘Jeff’? 

Who’s Jeff, you ask? That would be the name of our in-product AI capability. Missed it? Watch our short demo below to see Jeff in action!

Jeff has plans. We can confirm we will be exposing API endpoints so Jeff can hook into other software; we’re not going to be limited to the IriusRisk UI. The goal is to put Jeff wherever you happen to be working, and if you want to take your meeting transcripts and feed them automatically into our tool, there’ll be an API endpoint to do that. That’s the direction we’re heading in. Jeff will be available in both Community Edition (our free-forever version) and Enterprise Edition. Watch out for further news coming soon on our social media accounts…
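To give a flavor of the kind of integration we mean, a transcript-to-threat-model call could look something like the sketch below. To be clear, the endpoint path, payload and authentication shown here are purely hypothetical illustrations, not a published API:

```python
# Purely hypothetical sketch of a transcript-to-threat-model integration.
# The endpoint URL, payload fields and auth header are illustrative
# inventions, not a published IriusRisk API.
import requests

API_BASE = "https://example.invalid/api"  # placeholder host
API_TOKEN = "your-api-token"              # placeholder credential

with open("design-review-meeting.txt") as f:
    transcript = f.read()

response = requests.post(
    f"{API_BASE}/ai/transcripts",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"transcript": transcript},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```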