Artificial intelligence is transforming the nature of software engineering, shifting it from a primarily code-centric discipline toward one that emphasizes systems thinking, verification, and orchestration. Emerging paradigms such as “vibe coding” and defensive AI programming illustrate how the field now demands a broader understanding of how intelligent systems interact and evolve. While AI offers unprecedented productivity gains to those who integrate it effectively, engineers must cultivate the discernment, rigor, and methodological structure required to maintain reliability, quality, and correctness at scale.

Evolution From Builders to Curators

Software engineering has long been associated primarily with coding. Entry-level engineers focus on churning out code; as they grow in their careers, they learn more about system design, architecture, and related disciplines. But with the rise of AI-assisted coding tools, that primary identity is shifting.

Today, engineers are becoming curators of intelligence rather than just implementers. They are no longer writing every function by hand, and AI is transforming every aspect of the software development lifecycle.

Building

Engineers now leverage AI IDEs and LLM-based assistants such as Cursor, Windsurf, Copilot, and Claude to generate much of their code. Other AI tools can generate and build website and app prototypes.

Maintaining

Maintaining legacy code used to be a nightmare for engineers, but today AI-assisted onboarding is making lives easier. AI helps generate documentation, write unit tests for legacy code that explain system behavior, and fix complex scaling bugs.
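A common version of this is asking an assistant to draft characterization tests that pin down what legacy code currently does before anyone touches it. The sketch below is purely illustrative; `legacy_pricing.apply_discount` is a hypothetical legacy function, and the expected values stand in for observed behavior rather than a specification.

```python
# Illustrative only: characterization tests an AI assistant might draft for a
# hypothetical legacy function. The expected values capture observed behavior,
# not a specification.
import pytest

from legacy_pricing import apply_discount  # hypothetical legacy module


def test_standard_discount_applies_percentage():
    # Documents the happy path: a 10% discount on a 100.00 order.
    assert apply_discount(total=100.00, discount_pct=10) == 90.00


def test_zero_total_returns_zero():
    # Pins down an edge case the original authors may never have documented.
    assert apply_discount(total=0.00, discount_pct=10) == 0.00


def test_negative_discount_is_rejected():
    # Encodes current behavior so future refactors do not change it silently.
    with pytest.raises(ValueError):
        apply_discount(total=100.00, discount_pct=-5)
```

Tests like these double as executable documentation of system behavior, which is exactly where AI assistance tends to pay off in maintenance work.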

Code Review

AI assists reviewers in reviewing code, understanding system design, and connecting complex documentation across systems.

The craft of engineering is expanding from the mechanics of code to the art of directing AI to assist across the entire software engineering lifecycle.

The Rise of “Vibe Coding”

In this new era, many developers are embracing what’s been informally dubbed “vibe coding”: the practice of using AI-assisted tools - copilots, code generators, or agents - to rapidly produce scaffolding, tests, or even entire feature sets from stated intent. Vibe coding comes with an enticing promise: reduced time to market, going from idea to prototype quickly while offloading tedious boilerplate work. In practice, however, it introduces a hidden overhead - cycles spent reviewing, debugging, and refining to ensure the AI’s output aligns with production standards.

The Judgment Gap

AI tools flatten certain technical barriers while deepening others. GitHub CEO Thomas Dohmke describes four archetypes of AI usage (AI Skeptic, AI Explorer, AI Collaborator, AI Strategist) in his blog post “Developers Reinvented.”

AI Explorers can now generate working code faster than ever. But without proper system-level context and understanding, they may mistake plausibility for correctness. The AI’s confidence can mask critical flaws - missing edge cases, unsafe assumptions, or brittle architectures. The resulting cycle of iteration and surprises can even sour them on AI adoption altogether.

AI Strategists, conversely, use AI as leverage. They employ it to explore architecture options, refactor legacy systems, or test design hypotheses. Their strength lies in deciding when to trust automation - and when to overrule it.

The result is a growing judgment divide: AI shifts effort away from manual coding and toward cognitive evaluation. The engineers best able to scale themselves pragmatically with AI tools are emerging as AI Strategists.


AI Strategists

Today’s AI still has real limitations. Some developers report fatigue because they spend more time fixing AI-generated code before it is ready to ship and works with the rest of the ecosystem. Others understand the limitations of current AI capabilities, delegate appropriate tasks, and build leverage rather than frustration. AI Strategists delegate the right amount of work and act as orchestrators and verifiers, applying systems thinking at scale.

One practical technique is giving the AI repository-level guidance that narrows the problem space concretely and yields better output. This example of a .github/copilot-instructions.md file contains three instructions that are added to every chat question:

We use Bazel for managing our Java dependencies, not Maven, so when talking about Java packages, always give me instructions and code samples that use Bazel.

We always write JavaScript with double quotes and tabs for indentation, so when your responses include JavaScript code, please follow those conventions.

Our team uses Jira for tracking items of work.

Here is another sample instructions file:

# AI Instructions for This Repository

**Purpose**
Backend API for a task management app (users, projects, and notifications).

## Code Overview

- src/models/ → SQLAlchemy models

- src/routes/ → FastAPI routes

- src/services/ → Business logic

- tests/ → pytest unit tests

## Coding Style

- Use async FastAPI endpoints.

- Prefer dependency injection over globals.

- Validate all input with pydantic models.

- Always write tests for new routes.

## Security Rules

- Never log passwords or tokens.

- Use parameterized queries only.

- Hash passwords with bcrypt.

- Use Depends(get_current_user) for protected routes.

## What to Suggest

✅ Add new routes, models, or services consistent with existing patterns.
✅ Include docstrings and type hints.
❌ Do not suggest raw SQL or direct string concatenation; use the ORM.
❌ Do not invent new dependencies.

## Example

If asked to add a new /tasks endpoint:

- Create src/routes/tasks.py

- Use pydantic models in src/models/task.py

- Add tests in tests/test_tasks.py

## AI Summary:
Follow FastAPI + SQLAlchemy conventions, maintain security hygiene, and ensure all new features are validated and tested.
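An instructions file like this shapes what the assistant proposes. The sketch below shows the kind of route it nudges toward; it is a minimal illustration rather than real repository code, and `create_task_record`, `get_db`, and `get_current_user` are hypothetical helpers assumed to live in the layout described above.

```python
# Minimal sketch (assumed names, not real repo code) of a /tasks route that
# follows the instructions file above: async endpoint, pydantic validation,
# dependency injection, and an auth-protected route.
from fastapi import APIRouter, Depends
from pydantic import BaseModel

from src.services.tasks import create_task_record      # hypothetical business-logic helper
from src.routes.deps import get_db, get_current_user   # hypothetical dependencies

router = APIRouter(prefix="/tasks", tags=["tasks"])


class TaskCreate(BaseModel):
    """Request body for creating a task, validated with pydantic."""
    title: str
    project_id: int


@router.post("/", status_code=201)
async def create_task(
    payload: TaskCreate,
    db=Depends(get_db),
    user=Depends(get_current_user),  # protected route, per the security rules
) -> dict:
    """Create a task owned by the authenticated user; no raw SQL here."""
    task = await create_task_record(
        db, title=payload.title, project_id=payload.project_id, owner_id=user.id
    )
    return {"id": task.id, "title": task.title}
```

The value of the instructions file is visible in the details: the endpoint is async, input is a pydantic model, the route is protected via Depends(get_current_user), and business logic stays in the services layer.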

Spec-driven development: As a developer, you can provide the spec of the code in Markdown and let AI generate the code for you. Here is an example database schema in Markdown:

## Database

SQLite database in {Config.DbDir}/{Config.Organization}.db (create folder if needed). Avoid transactions. Save each GraphQL item immediately.

### Tables

#### table:repositories

- Primary key: name

- Index: updated_at

- name: Repository name (e.g., repo), without organization prefix

- has_discussions_enabled: Boolean indicating if the repository has discussions feature enabled

- has_issues_enabled: Boolean indicating if the repository has issues feature enabled

- updated_at: Last update timestamp

...main.md continues...
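Given a spec like this, the developer’s job shifts to reviewing what the AI produces against it. Below is a hedged sketch of the kind of Python such a spec might yield; `Config` here is a hypothetical settings object standing in for the {Config.DbDir} and {Config.Organization} placeholders, and the helper name is an assumption for illustration.

```python
# Illustrative sketch of code an AI might generate from the markdown spec above.
# Config is a hypothetical settings object providing DbDir and Organization.
import os
import sqlite3


def open_db(config) -> sqlite3.Connection:
    """Create the folder and SQLite database described in the spec."""
    os.makedirs(config.DbDir, exist_ok=True)
    path = os.path.join(config.DbDir, f"{config.Organization}.db")
    # isolation_level=None gives autocommit, matching "avoid transactions;
    # save each GraphQL item immediately".
    conn = sqlite3.connect(path, isolation_level=None)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS repositories (
               name TEXT PRIMARY KEY,           -- repo name without org prefix
               has_discussions_enabled INTEGER, -- boolean flag
               has_issues_enabled INTEGER,      -- boolean flag
               updated_at TEXT                  -- last update timestamp
           )"""
    )
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_repositories_updated_at "
        "ON repositories (updated_at)"
    )
    return conn
```

Whether the generated code actually honors the spec (primary key, index, autocommit behavior) is exactly what the human still has to verify.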

Importance of Code Review

AI-generated code often optimizes for surface correctness - what looks right - rather than deep robustness. This may introduce new risks: the missing edge cases, unsafe assumptions, and brittle architectures noted earlier can slip through when output merely looks plausible.
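To make that concrete, here is a purely illustrative pair of functions - not from any real codebase - showing how plausible-looking output can hide a missing edge case that review should catch:

```python
# Purely illustrative: a plausible-looking helper an assistant might produce,
# next to the reviewer-hardened version that handles the empty case.
def average_response_time(samples: list[float]) -> float:
    """Looks correct for the happy path, but divides by zero on an empty list."""
    return sum(samples) / len(samples)


def average_response_time_reviewed(samples: list[float]) -> float:
    """Defines behavior for the empty case instead of raising ZeroDivisionError."""
    if not samples:
        return 0.0
    return sum(samples) / len(samples)
```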

Teams quickly learn that AI productivity comes with an innovation tax - the cost of review, correction, and stabilization. It is extremely important for developers to realize that they are still accountable for the code they ship as authors and approve as reviewers.

Defensive AI Programming

To embrace AI advancements in programming, organizations must develop a culture of Defensive AI Programming. By developing a set of principles and practices that balance AI’s magic with engineering thoroughness, organizations can put the right guardrails in place to enable safe experimentation with AI. One lightweight guardrail is sketched below.
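As one hedged illustration of such a guardrail, a team might run a small pre-merge script that scans changed files for patterns its rules forbid, such as SQL built by string concatenation or secrets in log statements. The script and its patterns are assumptions for illustration, not an established tool, and the checks apply to every change regardless of whether a human or an AI wrote it:

```python
# Illustrative pre-merge guardrail: scan changed Python files for patterns the
# team has decided to forbid. The patterns are examples, not a complete policy.
import pathlib
import re
import sys

FORBIDDEN = [
    (re.compile(r"execute\(\s*f?['\"].*\+"), "possible SQL built by string concatenation"),
    (re.compile(r"print\(.*(password|token)", re.IGNORECASE), "possible secret in logs"),
]


def main(paths: list[str]) -> int:
    """Return a nonzero exit code if any forbidden pattern appears."""
    problems = []
    for name in paths:
        text = pathlib.Path(name).read_text(encoding="utf-8")
        for lineno, line in enumerate(text.splitlines(), start=1):
            for pattern, message in FORBIDDEN:
                if pattern.search(line):
                    problems.append(f"{name}:{lineno}: {message}")
    for problem in problems:
        print(problem)
    return 1 if problems else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Automated checks like this do not replace review; they simply make it harder for superficially plausible AI output to bypass the team’s standards.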

Defensive AI Programming ensures that human judgment remains at the core of automated productivity. The craft of software engineering evolves and endures, even as the tools transform.