Microsoft Added AI to Notepad and It Created a Security Failure Because the AI Was Stupidly Easy for Hackers to Trick
https://futurism.com/artificial-intelligence/microsoft-added-ai-notepad-security-flaw
As Microsoft continues to force AI features onto users of its Windows operating system and other crucial software, glaring issues keep cropping up. Executives have promised to turn the platform into an "agentic OS," to the dismay of many users, with CEO Satya Nadella boasting that much of the company's code is now being written by AI while condemning those who use the newly minted pejorative "Microslop."
While new bugs in an operating system update are certainly commonplace, some have noticed that the problem is getting worse than usual these days. Just last month, some Windows 11 enterprise users were aggravated to find that their systems were stuck in an endless shutdown loop, a security risk if left unattended.
Even the company's Notepad app, which once simply let users jot down notes in plain text, has turned into a bloated, AI-enhanced security liability. As malware researchers from the collective vx-underground found, the app has a remote code execution zero-day, meaning a vulnerability unknown even to the software's creators.
-snip-
The latest Notepad bug is symptomatic of a much larger struggle for the tech giant. Last week, the Wall Street Journal published an investigation, based on interviews with current and former employees, which found that Microsoft's confusing branding and grating lack of cohesion between its AI products had frustrated and turned off users. Worse yet, the adoption rate of its Copilot AI chatbot, which is baked into Windows 11, is extremely slim, suggesting a significant lack of public enthusiasm for the flagship feature.
-snip-
It was always a very bad idea to use flawed, inherently unreliable generative AI for coding, especially since a lot of developers are now so foolishly trusting that they no longer check AI-generated code.
https://www.itpro.com/software/development/ai-generated-code-is-fast-becoming-the-biggest-enterprise-security-risk-as-teams-struggle-with-the-illusion-of-correctness
Aikido found that AI-generated code is now the cause of one in five breaches, with 69% of security leaders, engineers, and developers on both sides of the Atlantic having found serious vulnerabilities in AI-generated code.
These risks are further exacerbated by the fact that many developers place too much faith in the technology when coding. A separate survey from Sonar found that nearly half of developers fail to check AI-generated code, placing their organizations at huge risk.