(Photo: Agentic IDE is redefining the future of software development. Taken at Lac de Neuchâtel, Switzerland. Image source: Ernest.)
tl;dr
- Kiro [1] is an Agentic IDE that represents a paradigm shift from traditional "Imperative Development" to "Intent-Driven Development."
- Through Specs (structured specifications), Steering (project guidance), Autopilot (autonomous execution), Hooks (automated triggers), and workflow integration, Kiro constructs a complete human-machine collaborative development environment.
- This article attempts to deconstruct Kiro's four-layer architecture (Intent, Knowledge, Execution, Oversight) and explore its strategic impact on technical organizations, aiming to understand future human-machine collaborative development patterns and software development workflows.
- In May this year, I shared "Reinventing Programming: How AI Transforms Our Enterprise Coding Approach" at AWS Summit Hong Kong 2025, deconstructing the topic from the SDLC perspective. Those interested can read this article alongside the slides.
1. Paradigm Shift: From “Imperative Development” to “Intent-Driven Development”
For a long time, the core of software development has been "Imperative Development": developers must tell the computer precisely how to perform each step, controlling the machine by writing line after line of code. However, as software system complexity grows exponentially, this approach is hitting efficiency ceilings.
A new mindset is emerging: Intent-Driven Development. Its core idea is that developers focus on clearly expressing why something should be done (Why) and what they expect to accomplish (What), while leaving the tedious details of how to do it (How) to more intelligent systems. This isn't just automation; it's thinking at a higher level of abstraction.
Kiro and similar "Agentic IDEs" are concrete implementations of this thinking. Kiro isn't just a code generation tool, but an intelligent agent that attempts to understand developer intent and to autonomously plan and execute tasks. Let's try to deconstruct Kiro's system design approach and explore what deeper strategic value it might bring to our organizations.
I always start by viewing new products and services positively, applying them to various known scenarios and exploring their potential value across different dimensions. Everyone can try deconstructing them in their own way; we don't need to reach identical conclusions, and the differences make the discussion more interesting.
2. Kiro’s System Architecture
We might abstract Kiro's capabilities into four collaborating logical layers:
- Intent Layer
- Knowledge Layer
- Execution Layer
- Oversight Layer
These four layers together form a complete human-machine collaboration loop.
2.1. Intent Layer: User Input Parsing
The Intent Layer is responsible for capturing and understanding developers’ original intentions. Kiro provides multiple channels to handle different forms of intent input.
When I previously used Amazon Q CLI, Claude Code, Cursor, or Cline, I mostly dealt with scattered, ad-hoc intents; and when handling task objectives spanning two or more dimensions at once, LLMs often got stuck in the details and couldn't extricate themselves (forgetting the original goal or the overall task). Kiro's attempt to make structured and unstructured intent work together is a challenging one.
Spec (Structured Intent)
This is Kiro’s core intent capture mechanism. It transforms vague development requirements into AI-understandable plans through a structured process. The workflow includes three core documents:
- `requirements.md`: Captures the structured requirements distilled from the developer's original intent (the EARS notation discussed in the FAQ below is one way to write them).
- `design.md`: AI generates technical design solutions, including sequence diagrams, based on the requirements and existing knowledge.
- `tasks.md`: Breaks the technical design down into a series of concrete, executable programming tasks.
This mechanism attempts to reduce intent transmission loss from product requirements to engineering execution.
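To make the three documents more concrete, here is a minimal, hypothetical sketch of what they might contain for a small feature. The file names come from Kiro's Spec workflow described above; the requirement wording, design notes, and task breakdown are illustrative assumptions, not actual Kiro output.

```markdown
<!-- requirements.md (illustrative) -->
## Requirement: Password reset
- WHEN a user requests a password reset, the system SHALL send a reset link to the registered email within 60 seconds.

<!-- design.md (illustrative) -->
## Design: Password reset flow
- Add a `POST /password-reset` endpoint; enqueue an email job; reset tokens expire after 30 minutes.
- Sequence: client -> API -> token store -> mail service (captured as a sequence diagram).

<!-- tasks.md (illustrative) -->
- [ ] 1. Create the password-reset token model and migration
- [ ] 2. Implement the `POST /password-reset` endpoint
- [ ] 3. Add the email template and queue worker
- [ ] 4. Write unit and integration tests for the flow
```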
Vibe and Terminal (Unstructured Intent)
Vibe
- One of the Chat modes.
- Provides a conversational interface for quick Q&A, exploratory learning, and debugging, handling more scattered and immediate intents.
Terminal
- Converts natural language commands (like "install project dependencies") into precise shell commands, reducing developers' memory burden.
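As a rough sketch of how that interaction feels (the prompt and the proposed command below are hypothetical; the actual command depends on the project and still waits for your confirmation, as described in the Oversight Layer):

```markdown
You:  "install project dependencies"
Kiro: proposes `npm install` (assuming a Node.js project) and waits for you to click "Execute"
```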
2.2. Knowledge Layer: Providing Decision Context
If the Intent Layer is about “problems” and “direction,” the Knowledge Layer is the “knowledge base” that AI relies on when answering questions. An Agent’s intelligence level depends somewhat on the breadth and depth of its knowledge. (Same when recruiting team members, right?!)
Codebase Indexing (Short-term Memory - Internal Project Knowledge)
Kiro automatically indexes the entire project's code in the background, and you can also use the Command Palette to ask Kiro to re-index. This is similar to Claude Code's `/init` command, but Kiro, as an agent, handles it automatically (proactively).
This enables AI (LLM + software) to understand project-internal function calls, class structures, and programming patterns, providing highly context-aware suggestions.
Indexing scenarios include:
- Project Import: Automatically index all files when first opening a workspace.
- File Changes: Index new or modified files.
- External Changes: Re-index files modified outside Kiro.
Steering (Long-term Memory - Project Guidance Principles)
This is an extremely strategically valuable feature—highly recommended!
- When I previously shared with development teams and friends how to introduce Amazon Q CLI, Claude Code, Cursor, Cline, and other AI software development tools, most people could master the more intuitive approaches, such as writing README.md or SPEC.md specification files, but they often overlooked guidance principles, constraints, workflows, and other implicit methodologies.
- Foreign companies often set up steering teams or steering committees at the strategic level to help establish guidance principles across departments or projects, synchronize strategic directions, make decisions, and allocate resources, so that projects run and progress more smoothly.
- Kiro’s introduction of the Steering concept in software development IDEs is similar to our team’s encouragement for engineers to engage more with product management (Dept of Product & Technology Integration).
- I sense something—maybe in the future this IDE can also develop glasses, robots, drones and other products with physical characteristics. Looking forward to it.
Appropriately describing constraints will help AI software development tools focus better.
Sidebar:
- If you’re curious about what guidance principles exist, I recommend referencing the AWS Well-Architected knowledge framework.
- Actually, AWS Well-Architected isn’t just about planning and building on AWS Cloud—it’s more about giving us connections and considerations across various dimensions between software development and business needs.
- I encourage everyone to try organizing cross-departmental teams (e.g., business operation, product operation, software architect, engineering, testing, infra operation, etc.) to read this framework document together within the organization, then integrate outputs suitable for your organization into Steering files.
Development teams can write "team knowledge" such as project architecture specifications, design patterns, preferred libraries, naming conventions, etc., into Markdown files in the `.kiro/steering/` directory.
Default Steering files include:
- `product.md`: Defines product purpose and goals
- `tech.md`: Records technical architecture (stack) and constraints
- `structure.md`: Outlines file organization and architectural decisions
Before executing any task, AI will prioritize referencing these “guidance principles” to ensure its output aligns with the team’s long-term specifications. This can be viewed as a kind of “Executable Architecture Documentation.”
For example, I pile principles like "please add a half-width space between Chinese and English characters for readability," "if you have doubts, ask me questions to clarify what I haven't explained clearly" [5], "please handle every detail completely, don't skip or omit anything, and list all your thoughts," "Amazon Working Backwards," and "DDD" into these guidance principle files.
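As a minimal sketch, principles like these could live in a Markdown file under `.kiro/steering/`. The file name and exact wording below are my own illustration, not an official template:

```markdown
<!-- .kiro/steering/writing-and-workflow.md (illustrative) -->
# Writing and workflow preferences
- Add a half-width space between Chinese and English characters for readability.
- If anything is unclear, ask me clarifying questions before writing code.
- Handle every detail completely; do not skip or omit steps, and list your reasoning.
- Frame new features with Amazon Working Backwards (start from the press release and FAQ).
- Apply DDD: keep bounded contexts and the ubiquitous language explicit in designs.
```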
2.3. Execution Layer: Transforming Intent into Action
When AI understands “intent” and possesses “knowledge,” the Execution Layer is responsible for transforming these into actual actions on code.
Autopilot (Autonomous Task Executor)
After `Specs` are approved, Autopilot mode can take over the task list defined in `tasks.md`. It will autonomously modify, create, or delete files and attempt to complete all tasks. This is the ultimate expression of Kiro's "Agentic" characteristics.
Two modes are provided:
- Autopilot Mode:
  - Kiro uses this mode by default
  - Suitable for experienced users, autonomously completing complex tasks
- Supervised Mode:
  - Suitable for new users; allows step-by-step review of changes (bite me, I'm just scared XDD)
Hooks (Event-Triggered Executor)
Hooks provide an event-driven automation mechanism.
Developers can set up AI-driven actions to trigger on specific events (like `On-Save`, `On-Create`). For example, when saving a TypeScript file, automatically add or update its test files.
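Conceptually, a hook pairs a trigger event with a natural-language instruction for the agent. The sketch below only illustrates that pairing; it is not Kiro's actual hook configuration syntax (see the official Hooks documentation linked at the end for the real setup):

```markdown
<!-- Conceptual hook description (not Kiro's actual config format) -->
- Trigger: On-Save for files matching `src/**/*.ts`
- Instruction: "Create or update the matching `*.test.ts` file so the saved module
  keeps unit-test coverage for its exported functions."
```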
2.4. Oversight Layer: Human-Machine Collaboration Interface
In any high-risk automation system, human oversight is quite important. Or more abstractly: in any system, oversight is quite important.
Kiro's design ensures developers always remain in the "driver's seat" (Human-in-the-Loop). (Well… so you can't put Kiro in a Robotaxi to execute… because… :p)
Main control mechanisms include:
- Staged Approval:
  - At each stage of the `Specs` process (requirements, design, tasks), developers need to confirm manually
- Command Execution Confirmation:
  - Commands generated by the natural language terminal take effect only after the user clicks "Execute"
- Code Change Review:
  - All code changes generated by Autopilot or Hooks are presented as clear diffs, awaiting the developer's final review and submission
3. Strategic Impact on Technical Organizations
Introducing Agentic tools like Kiro affects not just individual developer efficiency, but the entire technical organization’s operating model. Let me try asking various questions from different angles—these are some questions I’ve asked myself in my past notes.
Maybe everyone has similar thoughts—we can discuss and chat together on Threads or X.
3.1. Productivity Model Transformation
- Could ROI measurement standards potentially add a metric like Intent-to-Production Cycle Time?
- When AI can generate large amounts of code, does developer (or product manager) value shift to clarity of intent expression, quality of `Steering` rule design, and depth of understanding of the overall system architecture? Traditional productivity metrics often ignore some key hidden costs: How long does it take teams to become proficient at writing `Steering` documents? When AI can quickly output code, do senior developers experience "skill devaluation" anxiety? Does frequently reviewing AI suggestions create a new cognitive burden?
- When `Steering` becomes the "intelligent core" of organizational workflows, productivity measurement might need to include new dimensions like rule quality metrics, knowledge reuse rates, and intent communication efficiency. In your organization, what kind of developer would be considered "most productive"? The one who writes code fastest, or the one who can best design reusable `Steering` rules?
- AI collaboration might bring significant short-term efficiency improvements, but long-term value creation requires more careful consideration. Technical decision quality, team capability building, and innovation space all need to be clearly defined as trade-offs in `Steering` documents.
3.2. Team Structure Evolution
- Might it spawn new roles?
  - e.g., a "Steering Architect" focusing on maintaining and optimizing the knowledge base that guides AI, and an "AI Coordinator" responsible for reviewing and guiding AI workflows, ensuring alignment with team goals. One manages nouns, one manages verbs? (I love simple English sentences XDD) (I thank my English teacher for helping me write recommendation letters for admissions back then :p)
- But the emergence of these new roles is just the surface manifestation of organizational transformation. The deeper change is: how do existing team members (regularly or periodically) redefine their value?
- Old (nouns) don't go, new (value) doesn't come?!
- A "Steering Architect" isn't just a document maintainer but a curator of organizational intelligence: they need to transform business requirements, technical constraints, and team culture into AI-understandable guidance principles, ensure the consistency, completeness, and timeliness of `Steering` documents, and define how AI should prioritize various trade-offs in different contexts. In your organization, who would be most suitable as a Steering Architect? A senior technical architect, or a product manager with cross-domain communication skills?
- When AI or something Agentic becomes a "virtual team member," traditional team dynamics will face readjustment. Does responsibility attribution become blurred? Do communication patterns need transformation? Do decision-making processes need reshaping? How will organizational culture transformation occur? How can it occur? How should it occur? How can human team members still feel valued in AI collaboration? In multi-agent setups, might one agent discriminate against another? How do we cultivate a mindset of "collaborating with AI" rather than getting stuck in anxiety about "being replaced by AI"? When personal knowledge needs to be "fed" to AI, will that affect the motivation to share knowledge?
- `Steering` documents aren't just AI guidance principles but carriers of organizational learning. They transform implicit team knowledge into executable rules, preserve and transmit best practices through documentation, and continuously optimize organizational work patterns through feedback from AI execution results. How often do you think `Steering` documents should be updated? Who decides when updates are needed?
3.3. Technical Debt Management
- This is a double-edged sword. If left unmanaged, AI might produce large amounts of difficult-to-maintain code, accumulating technical debt. (Look back at vibe coding… then look toward vibe maintenance…)
- If the `Steering` functionality is used well, writing refactoring, TDD, and other good practices into rules, could AI potentially become a powerful ally in paying down and preventing technical debt?
- Technical debt management in the AI collaboration era faces challenges far more complex than they appear on the surface (don't trust what your eyes see).
- Traditional technical debt management is often reactive, but `Steering` offers a proactive approach: clearly define acceptable technical debt levels in `Steering`, set conditions and priorities for automatic refactoring, and prevent certain known anti-patterns through rules. (Decisive battle at the first moment?!)
- Try practicing balancing "fast delivery" and "code quality" priorities in `Steering` documents? Does this balance point need adjustment at different project stages? (Sigh, constantly growing new variables… but without variables, without dimensions, how do you get differentiation?)
- Does AI collaboration change the cost structure of technical debt? Does reviewing AI-generated code require different mental models (or layers, or tools)?
- Teams (or customers) need time to build trust in AI-generated code. AI-generated code might have logic structures that are difficult for humans to understand.
- Besides code-level technical debt, AI collaboration might introduce new "cultural technical debt": Will teams (after some time) lose the ability to independently solve complex problems? Might AI-generated code cause team understanding gaps in certain technical details? (Or was it like this even before AI?)
- In the context of AI collaboration, the definition of technical debt might need expansion: "intent debt" caused by unclear requirement expression, "Steering debt" caused by inconsistent or incomplete guidance principles, and "collaboration debt" caused by poor human-machine collaboration patterns. These new types of technical debt might be harder to identify and quantify than traditional code debt, but their long-term impact on organizations might be more profound. Or maybe these were originally the neighboring department's responsibility? Then it returns to the question of which department becomes unemployed? Optimistically, which department transforms first?
3.4. Knowledge Management Revolution
- The `Steering` document library itself is a "living" knowledge base. It's no longer Wiki documentation disconnected from code, but executable specifications that directly impact output, greatly reducing the risks of knowledge loss and knowledge gaps. But the significance of this change goes beyond improving document management: when organizational knowledge can directly "guide" AI behavior, what fundamental changes occur in the strategic significance of knowledge management? Does the organization even have this knowledge? How do you introduce it?
- Traditional knowledge management often stops at the "recording" stage (um, taking notes, writing documents), while `Steering` pushes knowledge management toward the "execution" stage.
- How can senior developers' "intuition" be transformed into AI-understandable rules? Can team workflows be embedded into AI's decision logic? Can organizational values and principles influence every line of code through `Steering`? (There will probably still be omissions, but pursue percentage improvements, not 100%.) (I'm optimistic)
- In your organization, which "can only be understood, not expressed" knowledge is most worth transforming into `Steering` rules? What important context will be lost in this transformation process?
- Could `Steering` documents become new carriers of organizational knowledge inheritance? Among existing documents or processes, which are closest to `Steering` documents?
- Can departing employees' experiences (many in small businesses?! Not few in large companies?!) be carried forward through `Steering` rules? (Or is that even needed? Maybe inappropriate source knowledge shouldn't be poured in?)
- New employees can learn organizational work patterns by observing AI behavior. `Steering` ensures organizational core knowledge is no longer held by only a few people: a proper new employee handbook!
- When senior employees leave, how can their knowledge be effectively transformed into `Steering` rules? How long does this process take? Who is responsible for verifying the accuracy of this knowledge? (Or are we pre-planting unexploded bombs?!) (I quit, you explode?!)
- Introducing `Steering` might change knowledge power structures in organizations. Whose knowledge gets incorporated into `Steering` rules becomes more transparent (right? more transparent, right?)
- Technical influence might be reflected more in `Steering` rule adoption rates. In your organization, who has the authority to decide which knowledge should be incorporated into `Steering` rules? Is this decision process transparent (and fair)?
- Finally, the knowledge management revolution driven by `Steering` also brings some implicit challenges: Will over-reliance on `Steering` rules limit thinking diversity? Will successful `Steering` rules suppress further knowledge exploration? When knowledge becomes collectivized, will individual knowledge responsibility be diluted? These challenges require organizations to stay sensitive to the complexity of knowledge management while enjoying the benefits that `Steering` brings. (Experts are a group of well-trained…)
4. Bottom Line: Getting Ready for the Single-Agent to Multi-Agent to Agentic Age
Currently, Kiro is mainly a versatile single agent operating in multiple modes. But judging from development trends in the AI field, the next step will very likely be multi-agent collaboration. We can imagine that in the future there might be a specialized "Quote Agent" for handling quotes, a "UI Agent" for the frontend, a "DBA Agent" specialized in databases, and a "Security Agent" for security scanning. Under the command of a General Agent coordinator (just one would be good, right?), they would engage in collaborative development around a group of `Specs`. This would be another huge leap in software development (probably).
Agentic IDEs like Kiro aren’t just efficiency-boosting tools—they’re more like a new layer of “operating system” (or Sandbox) that reshapes the interaction relationship between developers and computers.
For managers, the key task now isn't to argue about whether AI will replace humans or to dwell in anxiety, but to think: how do we design an organizational architecture and workflow that maximizes human-machine collaboration effectiveness? (Or "maximize" might be too stressful; we can just say "improve.")
Understanding and beginning to experiment with introducing these tools, and cultivating the team’s ability to “command” AI rather than being “replaced” by AI, will be the core of maintaining technical competitiveness in the coming years.
References
FAQ
Technical Architecture
- Q: How does Kiro.dev’s Agentic IDE architecture differ from traditional IDEs?
- Traditional IDEs primarily provide code editing, compilation, and debugging features, while Agentic IDEs [6] possess autonomous decision-making and action capabilities, able to understand goals and plan actions to complete complex tasks.
- Agentic IDEs aren’t just passive auto-completion, but autonomous agents capable of reasoning, adapting, and taking action within development environments.
- Q: What advantages does the EARS notation have in software requirements engineering?
- EARS [3][4] uses a handful of keywords and a simple set of underlying rules to gently constrain natural language requirements, so that requirements follow temporal logic and keep a consistent clause ordering (see the template sketch after this answer).
- EARS reduces or eliminates common problems in natural language requirements, particularly suitable for non-native English speakers writing requirements.
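For reference, the basic EARS templates look roughly like this; the keyword patterns follow the EARS papers cited above, while the concrete requirement texts are illustrative examples of mine:

```markdown
- Ubiquitous:         The system shall log every failed login attempt.
- Event-driven:       WHEN the user submits the form, the system shall validate all required fields.
- State-driven:       WHILE the device is in low-power mode, the system shall pause background sync.
- Unwanted behavior:  IF the payment gateway times out, THEN the system shall retry at most three times.
- Optional feature:   WHERE two-factor authentication is enabled, the system shall request a one-time code.
```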
Workflow
- Q: How do file event-triggered automation workflows improve development efficiency?
- Kiro's Agent Hooks can eliminate the need to manually request routine tasks, (as much as possible?) ensure codebase consistency, maintain consistent code quality by setting up hooks for common tasks, prevent security vulnerabilities, reduce manual overhead, and standardize team processes.
Troubleshooting
- Q: Kiro seems to have some issues with WSL2? How to fix it?
- You can refer to this Japanese solution - AWSのAIコーディングエージェントKiroをWSL2から起動する設定 #AWS - Qiita.
- My colleague experimented with it—it works.
- When my Cursor and Q CLI interfered with each other previously, I used similar solutions. The general direction is clarifying `$PATH`, and clarifying the shell and profile settings.
- Q: When encountering problems with Kiro, how can I report issues?
- The development team actively monitors Kiro issues: https://github.com/kirodotdev/Kiro/issues
- You can report the problems you encounter often (it's also good practice for your problem-description skills), browse issues to see how others use Kiro and what problems they run into, and give +1s frequently.
Official Documentation
- Kiro Getting Started
- Kiro First Project
- Kiro Editor Interface
- Kiro Codebase Indexing
- Kiro Specs Concepts
- Kiro Specs Best Practices
- Kiro Chat Autopilot
- Kiro Chat Vibe
- Kiro Chat Terminal
- Kiro Hooks
- Kiro Hooks Types
- Kiro Hooks Management
- Kiro Hooks Best Practices
- Kiro Hooks Examples
- Kiro Hooks Troubleshooting
- Kiro Steering
Further Reading
- Easy Approach to Requirements Syntax (EARS) | IEEE Conference Publication | IEEE Xplore
- Adopting EARS Notation for Requirements Engineering - Jama Software
- Thanks to BobChao for sharing. I recommend his professional services.
- Agentic IDEs: Next Frontier in Intelligent Coding - The New Stack