Microsoft has announced a vision to tackle cybersecurity challenges that have plagued the tech company in recent years. The newly introduced ‘Secure Future Initiative’ leans heavily on AI.
Microsoft’s vice chairman and president Brad Smith noted, “In the recent months, we’ve concluded within Microsoft that the increasing speed, scale, and sophistication of cyberattacks call for a new response.”
Even though AI is usually lauded and often looked upon as a messiah of the tech industry, the reality begs to differ. As companies similar to Microsoft continue to scramble to understand how deeply AI can be integrated into securing systems, they appear to be digging their own computational graves.
Manually investigating security risks is a cumbersome process but the number of issues are manageable. Rise in generative AI has given birth to problems which did not even exist before. Keeping up with the risks generated through AI is a hard nut to crack as the technology is developing at a much faster rate, leaving no time for companies to magnify upon the security weaknesses.
Analysts have said that language models are so complex that it is nearly impossible to audit them in-depth. “The concern that most security leaders have is that there’s no visibility, monitoring, or explainability for some of those features,” Jeff Pollard, a cybersecurity analyst at Forrester Research recently told The Wall Street Journal.
New Fear Unlocked
On the one hand, generative AI has given the world tons of models and algorithms to play around with. Yet on the other, it is also prone to introducing security risks due to their nature of being trained on preexisting data — including code.
At a conference, David Johnson, a data scientist at the European Union’s law-enforcement agency Europol, pinpointed, “That code can contain a vulnerability, so if the model subsequently generates new code, it can inherit that same vulnerability.”
By signing up for generative AI, companies also unlock fears in new forms like “prompt injections,” where the bad guys use “prompts” or text-based instructions to manipulate these AI models into sharing sensitive information. In less than a year of OpenAI’s ChatGPT being released, several incidents have come forth hinting towards the deficiency of security.
South Korean tech giant Samsung banned the use of ChatGPT after its staff accidentally leaked sensitive data via OpenAI’s chatbot. iPhone maker Apple and e-commerce giant Amazon also joined the growing list of companies cracking down on employees using the hottest AI chatbot of the year.
After these incidents, ChatGPT itself faced a data breach during a nine-hour window on March 20. The creators of ChatGPT at OpenAI issued a statement which noted that approximately 1.2% of the ChatGPT Plus subscribers who were active during this time period had their data exposed. While the percentage seems minuscule, the number was not small as over a million users’ data was breached during the event.
No Quick Fix
Getting an accurate accounting of total global economic losses due to cybercrime and cyberattacks is difficult, but Microsoft believes that total losses have been greater than $6 trillion and could close in on $10 trillion by 2025.
Two months ago, Brian Finch, co-leader of the cybersecurity, data protection & privacy practice at law firm Pillsbury Law told CNBC, “Given the economics of cyberattacks — it’s generally easier and cheaper to launch attacks than to build effective defenses — I’d say AI will be on balance more hurtful than helpful.“
As companies adopt AI internally, it is clear that the human-in-loop architecture is the key for security. Consequently, companies have started shifting towards “zero trust” models where defenses are set up to constantly challenge and inspect network traffic and applications in order to verify that they are not harmful.
As of now AI systems are not capable enough to outsmart hackers behind computer screens. So, co-existing is a critical factor till the time AI becomes dependable enough.