Discerning reality from the hype around AI


When it comes to artificial intelligence and applying it to software development, it’s hard to separate the hype from the reality of what the technology can actually do today.

Movies tend to present AI as something scary, suggesting that in the not-too-distant future humans will be slaves to the machines. Other films show AI being used for all kinds of things that are far in the future – and most likely will never be real. The reality, of course, is somewhere in between.

While there is a need to tread carefully into the AI realm, what has been done already, especially in the software life cycle, has shown how helpful it can be. AI is already saving developers from mundane tasks while also serving as a partner – a second set of eyes – to help with coding issues and identifying potential problems.

Kristofer Duer, Lead Cognitive Researcher at HCLSoftware, noted that machine learning and AI aren’t yet what the “Terminator” movies, for example, make them out to be. “It doesn’t have discernment yet, and it doesn’t really understand morality at all,” Duer said. “It doesn’t really understand more than you think it should understand. What it can do well is pattern matching; it can pluck out the commonalities in collections of data.”

Pros and cons of ChatGPT

Organizations are finding the most interest in generative AI and large language models, which can absorb data and distill it into human-consumable formats. ChatGPT has perhaps had its tires kicked the most, yielding volumes of information that is not always accurate. Duer said he’s thrown security problems at ChatGPT, and it has proven it can understand problematic snippets of code almost every time. When it comes to “identifying the problem and summarizing what you need to worry about, it’s pretty damn good.”
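For illustration, the snippet below is the kind of short, problematic code an LLM will typically flag on sight, along with the fix it usually suggests. It is an invented example, not one of Duer’s actual test cases.

```python
# Hypothetical example of code an LLM assistant will reliably flag.
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated straight into the SQL
    # statement, a textbook SQL injection (CWE-89).
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The fix an assistant usually suggests: a parameterized query,
    # so the driver handles escaping instead of string concatenation.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```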

One thing it doesn’t do well, though, is understand when it’s wrong. Duer said when ChatGPT is wrong, it’s confident about being wrong. ChatGPT “can hallucinate horribly, but it doesn’t have that discernment to understand what it’s saying is absolute drivel. It’s like, ‘Draw me a tank,’ and it’s a cat or something like that, or a tank without a turret. It’s just wildly off.”

Rob Cuddy, Customer Experience Executive at HCLSoftware, added that in a lot of ways, this is like trying to parent a pre-kindergarten child. “If you’ve ever been on a playground with them, or you show them something, or they watch something, they come up with some conclusion you never expected, and yet they are – to Kris’s point – 100% confident in what they’re saying. To me, AI is like that. It’s so dependent on their experience and on the environment and what they’re currently seeing as to the conclusion that they come up with.”

Like any relationship, the one between IT organizations and AI is a matter of trust. You build it to find patterns in data, or ask it to find vulnerabilities in code, and it returns an answer. But is that the correct answer?

Colin Bell, CTO of HCL AppScan at HCLSoftware, said he’s worried about developers becoming over-reliant on generative AI, as he is seeing a reliance on models like Meta’s Code Llama and GitHub’s Copilot to develop applications. But those models are only as good as the data they have been trained on. “Well, I asked the Gen AI model to generate this bit of code for me, and I asked it to be secure as well. So it came back with that code. So therefore, I trust it. But should we be trusting it?”

Bell added that now, with AI tools, less experienced developers can create applications by giving the model some specifications and getting back code – and then they think their job for the day is done. “In the past, you would have had to troubleshoot, go through and look at different things” in the code, he said. “So that whole dynamic of what the developer is doing is changing. And I think AI is probably creating more work for application security, because there’s more code getting generated.”

Duer noted that despite the advances in AI, it will still err, sometimes with fixes that could even make security worse. “You can’t just point AI to a repo and say, ‘Go crazy,’” he said. “You still need a scanning tool to point you to the X on the map where you need to start looking as a human.” He added that AI in its current state seems to be correct between 40% and 60% of the time.

Bell also noted the importance of having a human do a level of triage. AI, he said, will make vulnerability assessment clearer and more understandable to the analysts sitting in the middle. “If you look at large financial organizations, or organizations that treat their application security seriously, they still want that person in the middle to do that level of triage and audit. It’s just that AI will make that a little bit easier for them.”

Mitigating risks of using AI

Duer said HCLSoftware uses different processes to mitigate the risks of using AI. One, he said, is Intelligent Finding Analytics (IFA), which uses AI to limit the number of findings presented to the user. The other is Intelligent Code Analytics (ICA), which tries to determine the likely security role of methods and APIs.
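A minimal sketch of what that kind of inference might look like appears below. It is not HCL’s implementation; the naming-pattern rules are invented for illustration, and a real engine would look at far more than a method’s name.

```python
# Hypothetical sketch of ICA-style inference, not HCL's implementation:
# guess a method's security role (taint source, sink, or sanitizer)
# from naming patterns alone.
import re

RULES = [
    (re.compile(r"read|recv|param|input", re.I), "source"),
    (re.compile(r"exec|query|write|send|eval", re.I), "sink"),
    (re.compile(r"escape|sanitize|encode", re.I), "sanitizer"),
]

def classify_method(name: str) -> str:
    """Return the likely security role of a method, or 'neutral'."""
    for pattern, role in RULES:
        if pattern.search(name):
            return role
    return "neutral"

print(classify_method("getParamValue"))   # source
print(classify_method("executeQuery"))    # sink
print(classify_method("formatDate"))      # neutral
```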

“The history behind the two AI pieces we have built into AppScan is interesting,” Duer explained. “We were making our first foray into the cloud and needed an answer for triage. We had to ask ourselves new and very different questions. For example, how do we handle simple ‘boring’ things like source->sink combinations such as file->file copy? Yes, something could be an attack vector, but is it ‘attackable’ enough to present to a human developer? Simply put, we could not present the same number of findings as we had in the past. So, our goal with IFA was not to make a fully locked-down house of protection around all pieces of our code, because that is impossible if you want to do anything with any kind of user input. Instead we wanted to provide meaningful information in a way that was immediately actionable.

“We first tried out a rudimentary version of IFA to see if machine learning could be applied to the problem of ‘is this finding interesting,’” he continued. “Initial tests came back showing over 90% effectiveness on a very small sample size of test data. This gave us the needed confidence to expand the use case to our trace flow languages. Using attributes that represent what a human reviewer would look at in a finding to determine if a developer should review the problem, we are able to confidently say most findings our engine generates with boring characteristics are now excluded as ‘noise.’”
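The core idea – suppress findings whose characteristics mark them as “boring” before a human ever sees them – can be sketched roughly as follows. This is an invented illustration, not AppScan’s actual IFA logic; the attributes and combinations are hypothetical.

```python
# Hypothetical sketch of IFA-style noise filtering, not AppScan's
# actual logic: a finding is a source->sink data flow, and known
# low-interest combinations are suppressed before human review.
from dataclasses import dataclass, field

# Source->sink combinations treated as "boring", e.g. file->file copies.
BORING_COMBOS = {("file", "file"), ("constant", "log")}

@dataclass
class Finding:
    source: str   # where tainted data enters, e.g. "http_param"
    sink: str     # where it ends up, e.g. "sql_query"
    trace: list = field(default_factory=list)  # call path, source to sink

def worth_reviewing(f: Finding) -> bool:
    # Surface only findings whose source->sink pairing is plausibly
    # attackable, not merely a theoretically possible data flow.
    return (f.source, f.sink) not in BORING_COMBOS

findings = [
    Finding("http_param", "sql_query", ["handler", "build_query"]),
    Finding("file", "file", ["read_config", "copy_config"]),
]
for f in filter(worth_reviewing, findings):
    print(f"review: {f.source} -> {f.sink}")  # only the SQL flow survives
```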

This, Duer said, automatically saves real humans countless hours of work. “In one of our more famous examples, we took an assessment with over 400k findings down to roughly 400 a human would need to review. That is a tremendous amount of focus generated by a scan into the things which are truly important to look at.”
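Mechanically, the approach Duer describes amounts to training a classifier on the attributes a human triager would weigh. The toy sketch below is a hypothetical stand-in, not HCL’s model; the feature names and training data are invented.

```python
# Hypothetical sketch of "is this finding interesting?" framed as a
# supervised-learning problem. Features and data are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [trace_length, crosses_trust_boundary, user_controlled_input,
#            sink_severity]; label 1 = a human should review the finding.
X_train = [
    [12, 1, 1, 3],   # long trace, user input reaching a severe sink
    [2,  0, 0, 1],   # short internal flow into a low-severity sink
    [8,  1, 1, 2],
    [3,  0, 1, 1],
]
y_train = [1, 0, 1, 0]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

new_finding = [[10, 1, 1, 3]]
print("review" if model.predict(new_finding)[0] else "noise")
```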

While Duer acknowledged the months and even years it can take to prepare data to be fed into a model, Cuddy, on the question of AI for auto-remediation, picked up on the liability factor. “Let’s say you’re an auto-remediation vendor, and you’re supplying fixes and recommendations, and now someone adopts those into their code, and it’s breached, or you have an incident or something goes wrong. Whose fault is it? So there are those conversations that still have to be worked out. And I think every organization that is looking at this, or would even consider adopting some form of auto-remediation, is still going to need that man in the middle validating that recommendation, for the purposes of incurring that liability, just like we do every other risk assessment. At the end of the day, it’s how much [risk] can we really tolerate?”

To sum it all up, organizations have important decisions to make regarding security and adopting AI. How much risk can they accept in their code? If it breaks, or is broken into, what’s the bottom line for the company? As for AI, will there come a time when what it creates can be trusted without laborious validation to ensure accuracy and meet compliance and legal requirements?

Will tomorrow’s reality ever meet today’s hype?