Anthropic’s new auto mode for Claude Code lets AI execute tasks with fewer approvals, reflecting a broader shift toward more autonomous tools that balance speed with safety through built-in safeguards.
Anthropic hands Claude Code more control, but keeps it on a leash

Rebecca Bellan
2:00 PM PDT · March 24, 2026

For developers using AI, "vibe coding" right now comes down to a choice: babysit every action or risk letting the model run unchecked.
Anthropic says its latest update to Claude aims to eliminate that choice by letting the AI decide which actions are safe to take on its own — with some limits.
The move reflects a broader shift across the industry, as AI tools are increasingly designed to act without waiting for human approval.
The challenge is balancing speed with control: too many guardrails slow things down, while too few can make systems risky and unpredictable.
Anthropic’s new “auto mode,” now in research preview — meaning it’s available for testing but not yet a finished product — is its latest attempt to thread that needle.