Vibe coding is making developers faster. ACM says it is also making software less secure.

Development teams are adopting AI-assisted coding at a pace that has outrun any industry consensus on what the practice does to code security, reliability, and long-term maintainability. A new TechBrief from the Association for Computing Machinery's Technology Policy Council, published Wednesday, attempts to close that gap.

The practice the brief calls "vibe coding" — describing software requirements in natural language and letting AI tools generate, debug, and in some cases execute the result — is gaining ground across enterprise and developer workflows. The ACM's Technology Policy Council, which sets policy positions for the world's largest computing professional society, argues that the speed gains are real but come attached to engineering risks organisations are not yet accounting for systematically.

Simson Garfinkel, Chief Scientist at BasisTech and lead author of the brief, uses these tools himself. "It's making developers dramatically more effective, but it's also introducing security vulnerabilities, increasing technical debt, and producing code that can be difficult to maintain," he said. "To use these tools safely, strong software engineering practices are still required, including clear specifications, meaningful testing, and enforced standards."

The brief identifies several failure modes. Security vulnerabilities can be inherited from patterns in the AI's training data. Testing coverage tends to be inconsistent or absent. Code produced this way often cannot be meaningfully reviewed by the humans who nominally own it. The council singles out agentic AI coding tools — systems that do not merely suggest code but execute it across connected services — as the sharpest emerging risk: the surface area for prompt injection, accidental data exposure, and unintended deletion is substantially larger when an AI agent operates with live permissions rather than making suggestions in an IDE.

Garfinkel's summary of the underlying problem is terse. "AI systems do not understand what they're producing, and they are not capable of reasoning about the consequences." The council's practical guidance runs to four points: validate AI output using formal methods and established testing practices, use automated tooling to audit for vulnerabilities, enforce human oversight before code reaches production, and ensure systems remain comprehensible to developers over time rather than accumulating opaque, AI-generated layers that no engineer fully understands.
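What "meaningful testing" of AI output looks like in practice can be made concrete with a small sketch. The following example is illustrative rather than drawn from the brief: a hypothetical AI-generated helper function, paired with human-written tests that pin down the intended behaviour independently of the generated implementation.

    # Hypothetical example: an AI-generated helper plus human-written tests.
    # The function and test cases are illustrative, not taken from the ACM brief.

    import re

    def slugify(title: str) -> str:
        """Convert a title to a URL-safe slug (AI-generated in this scenario)."""
        slug = title.strip().lower()
        slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumerics to hyphens
        return slug.strip("-")

    # Human-written tests encode the specification, so they catch regressions
    # even if the implementation above is regenerated by an AI tool.
    def test_slugify_basic():
        assert slugify("Hello, World!") == "hello-world"

    def test_slugify_edge_cases():
        assert slugify("  --Already--Slugged--  ") == "already-slugged"
        assert slugify("") == ""

Running such tests under a standard framework like pytest, alongside an automated vulnerability scanner such as Bandit for Python code, is one plausible way a team might implement the council's first two recommendations; the brief itself does not prescribe specific tools.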

The full TechBrief is available at https://dl.acm.org/doi/book/10.1145/3807518.
