How AI coding agents function and their limitations

AI coding agents from OpenAI, Anthropic, Google, and others can work on software projects for extended stretches, writing apps and fixing bugs under human oversight. These tools are built on large language models and face real constraints, including limited context processing and high computational costs. Understanding how they work helps developers decide when to deploy them effectively.

AI coding agents represent a significant advancement in software development, powered by large language models (LLMs) trained on vast datasets of text and code. These models act as pattern-matching systems, generating outputs based on prompts by interpolating from training data. Refinements such as fine-tuning and reinforcement learning from human feedback enhance their ability to follow instructions and utilize tools.

Structurally, these agents feature a supervising LLM that interprets the user's task and can delegate work to parallel subagents, following a cycle of gathering context, taking action, verifying results, and repeating. In local command-line setups, users grant permission for file operations, command execution, or web fetches; web-based versions such as Codex and Claude Code instead run in sandboxed cloud environments.
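To make that loop concrete, here is a minimal sketch in Python. It is illustrative only: call_llm, run_tool, and ask_permission are hypothetical placeholders standing in for a model client and a tool layer, not any vendor's actual API.

    # Minimal sketch of the agent loop: gather context, act, verify, repeat.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        description: str
        history: list[str] = field(default_factory=list)  # accumulated context

    def call_llm(task: Task) -> dict:
        """Placeholder: ask the supervising LLM for the next step."""
        raise NotImplementedError("wire up a real model client here")

    def run_tool(step: dict) -> str:
        """Placeholder: perform a file edit, shell command, or web fetch."""
        raise NotImplementedError("wire up a real tool layer here")

    def ask_permission(step: dict) -> bool:
        """Local CLI setups gate risky operations behind user approval."""
        return input(f"Allow {step['tool']}? [y/N] ").strip().lower() == "y"

    def agent_loop(task: Task, max_steps: int = 20) -> None:
        for _ in range(max_steps):
            step = call_llm(task)          # gather context, decide next action
            if step.get("done"):           # the model judges the task complete
                break
            if not ask_permission(step):   # permission gate before acting
                task.history.append(f"user denied: {step['tool']}")
                continue
            result = run_tool(step)        # take the action
            task.history.append(result)    # record the result, verify, repeat

Web-based variants replace the per-action permission prompt with a sandboxed environment, so the same loop can run unattended in isolation.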

A key constraint is the LLM's finite context window, which holds the conversation history and code but suffers from 'context rot' as token counts grow: recall degrades, and because self-attention compares every token with every other, compute cost rises roughly with the square of the sequence length. To mitigate this, agents outsource work to external tools, for example writing a script to extract data rather than loading it all into context, and apply context compression, summarizing older history to preserve essentials such as architectural decisions while discarding redundancies. Multi-agent systems, built on an orchestrator-worker pattern, allow parallel exploration but consume far more tokens: about four times more than a standard chat, and around fifteen times more for complex setups.
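As a rough illustration of context compression, the sketch below collapses older turns into a single summary once the history exceeds an assumed token budget, keeping the most recent turns verbatim. The summarize() call and the whitespace token counter are hypothetical stand-ins, not a real agent's internals.

    MAX_TOKENS = 100_000   # assumed context budget for this sketch
    KEEP_RECENT = 10       # most recent turns are kept verbatim

    def count_tokens(text: str) -> int:
        # crude stand-in; real agents use the model's own tokenizer
        return len(text.split())

    def summarize(turns: list[str]) -> str:
        """Placeholder for a model call that condenses older history,
        preserving essentials like architectural decisions."""
        raise NotImplementedError("wire up a real summarization call here")

    def compress_if_needed(history: list[str]) -> list[str]:
        total = sum(count_tokens(turn) for turn in history)
        if total <= MAX_TOKENS:
            return history                   # still fits; leave history as-is
        old, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
        return [summarize(old)] + recent     # one summary plus the recent turns

Keeping recent turns verbatim while summarizing the rest mirrors the trade-off described above: essentials survive, redundancies are discarded.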

Best practices emphasize human planning, version control, and incremental development to avoid pitfalls like 'vibe coding,' where developers ship AI-generated code they do not fully understand, risking security flaws and technical debt. Independent researcher Simon Willison stresses that developers must verify functionality: "What’s valuable is contributing code that is proven to work." A July 2025 METR study found that experienced developers took 19% longer on tasks when using AI tools built on models such as Claude 3.5, though the result carries caveats: the developers knew their codebases deeply, and the models tested have since been superseded.

Ultimately, these agents are best suited to proof-of-concept demos and internal tools, and because they lack true agency, they require vigilant human oversight.

Related articles

Illustration of the Agentic AI Foundation launch, showing autonomous AI agents on a conference screen.
Image generated by AI

Linux Foundation launches Agentic AI Foundation

Reported by AI · Image generated by AI

The Linux Foundation has launched the Agentic AI Foundation to foster open collaboration on autonomous AI systems. Major tech companies, including Anthropic, OpenAI, and Block, contributed key open-source projects to promote interoperability and prevent vendor lock-in. The initiative aims to create neutral standards for AI agents that can make decisions and execute tasks independently.

In 2025, AI agents became central to progress in artificial intelligence, enabling systems to use tools and act autonomously. They moved from theory to everyday applications, transforming how humans interact with large language models. Yet they also brought challenges such as security risks and regulatory gaps.

Reported by AI

A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.

Larian Studios has detailed its use of machine learning for efficiency in Divinity development, while confirming a ban on generative AI for concept art and enhanced protections for voice actors, as clarified by Machine Learning Director Gabriel Bosque.

Reported by AI

Cybersecurity experts are increasingly alarmed by how artificial intelligence is reshaping cybercrime, with tools like deepfakes, AI phishing, and dark large language models enabling even novices to execute advanced scams. These developments pose significant risks to businesses in the coming year. Published insights from TechRadar underscore the scale and sophistication of these emerging threats.

The GNOME Shell Extensions store has updated its guidelines to prohibit AI-generated extensions amid a surge in low-quality submissions. Developers may still use AI as a tool for learning and development, but code primarily written by AI will be rejected. This move aims to maintain code quality and reduce review delays.

Reported by AI

OpenAI has launched ChatGPT-5.2, a new family of AI models designed to enhance reasoning and productivity, particularly for professional tasks. The release follows an internal alert from CEO Sam Altman about competition from Google's Gemini 3. The update includes three variants aimed at different user needs, starting with paid subscribers.
