A while ago, I watched an automated system do something that felt oddly familiar.

It kept retrying the same failing operation. Again and again. Nothing technically wrong with the logic. No exception was thrown. The retries were allowed. The timeout had not been reached.

But if a human had been watching, they would have stopped it much earlier.

Not because of logic. Because of discomfort.

That moment stuck with me, because it highlighted something we rarely talk about in AI systems: emotion exists everywhere in the interaction, but nowhere in the system.

AI understands emotion. Systems do not.

Modern AI models are very good at emotional inference. They can detect frustration, adjust tone, soften language, and respond politely. In many cases, they are better at this than humans.

But that understanding lives inside the model, not the system.

Once a response is generated, the emotional context disappears. There is no persistent emotional state. No memory of rising frustration. No sense of eroding trust. No awareness that something has gone wrong too many times.

Emotion is treated like tone. Cosmetic. Disposable.

That works fine for chat.

It breaks down once systems start acting.

When autonomy enters the picture, emotion becomes operational

As soon as AI systems move beyond suggestion and into execution, emotional context stops being a UX concern and starts becoming a governance problem.

Think about systems that retry failed operations on their own, escalate issues, or execute multi-step tasks on a user's behalf.

In these systems, emotional signals often matter more than raw success or failure.

Repeated failure often means frustration.
Low trust should probably mean confirmation.
High urgency combined with uncertainty should raise risk.
Stress should slow things down, not speed them up.

Humans do this instinctively. Machines do not.

And today, most systems handle this implicitly.
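Made explicit, those mappings are small enough to write down. Here is a minimal sketch in Python; the signal names, thresholds, and actions are all assumptions for illustration, not something taken from an existing framework:

```python
from dataclasses import dataclass

# Hypothetical signals an autonomous system might already have on hand.
@dataclass
class Signals:
    failure_count: int   # consecutive failures of the same operation
    trust: float         # 0.0 (eroded) .. 1.0 (high)
    urgency: float       # 0.0 .. 1.0
    uncertainty: float   # 0.0 .. 1.0

def next_action(s: Signals) -> str:
    """Map emotional-context signals to a governance action.
    The thresholds are illustrative, not prescriptive."""
    if s.failure_count >= 3:
        return "pause"       # repeated failure reads as frustration
    if s.trust < 0.4:
        return "confirm"     # low trust should mean confirmation
    if s.urgency > 0.7 and s.uncertainty > 0.6:
        return "escalate"    # urgency plus uncertainty raises risk
    return "proceed"
```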


The quiet mess under the hood

Right now, emotional reasoning is usually buried in places no one audits: prompt instructions, ad hoc heuristics, retry limits, hard-coded timeouts.

It works. Until it does not.

The problem is not capability. It is structure.

These approaches are implicit, scattered, and non-composable. Two agents can interpret the same situation differently. The system cannot easily explain why it escalated, paused, or pushed forward.

When something goes wrong, there is no clear answer to a simple question: why did the system decide this was okay?


The real gap is not emotional intelligence

The gap is simpler and more uncomfortable.

Emotional state is not treated as a system variable.

We track task state. Execution state. Resource state. Error state.

But emotional state, if it exists at all, lives in prompts and heuristics.

Without that, emotional reasoning cannot be governed. It can only be improvised.
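As a rough illustration of what "emotional state as a system variable" could look like, here is a sketch in which it sits next to task state as ordinary, persistent fields with explicit update rules; the field names and update magnitudes are invented for the example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskState:
    attempts: int = 0
    last_error: Optional[str] = None

# Emotional state tracked like any other state: explicit fields,
# explicit transitions, persisted across steps instead of living in a prompt.
@dataclass
class EmotionalState:
    frustration: float = 0.0   # rises with repeated failure
    trust: float = 1.0         # erodes when things go wrong too often

    def on_failure(self) -> None:
        self.frustration = min(1.0, self.frustration + 0.2)
        self.trust = max(0.0, self.trust - 0.1)

    def on_success(self) -> None:
        self.frustration = max(0.0, self.frustration - 0.3)
        self.trust = min(1.0, self.trust + 0.05)
```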


What emotional governance actually means

Emotional governance does not mean making AI more empathetic or expressive.

It means doing something very boring and very powerful: treating emotional context as explicit state, with rules about how it changes and how it constrains action.

For example: pausing after repeated failures, requiring confirmation when trust is low, escalating when urgency and uncertainty are both high.

These are not emotional responses. They are governance decisions.

And governance only works when it is explicit, inspectable, and auditable.
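One way to get all three properties is to record every such decision as data: the inputs, the rule that fired, and a reason, so "why did the system decide this was okay?" has a literal answer. The structure below is a sketch under that assumption, not part of any specification:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class GovernanceDecision:
    action: str     # e.g. "pause", "confirm", "escalate", "proceed"
    rule: str       # which threshold or rule triggered it
    reason: str     # human-readable explanation
    inputs: dict    # the signal values that were evaluated
    timestamp: str

def record_decision(action: str, rule: str, reason: str, inputs: dict) -> GovernanceDecision:
    decision = GovernanceDecision(
        action=action,
        rule=rule,
        reason=reason,
        inputs=inputs,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Append to an audit log so the decision can be inspected later.
    with open("governance_audit.jsonl", "a") as log:
        log.write(json.dumps(asdict(decision)) + "\n")
    return decision
```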

A possible direction: standardizing the boring parts

One possible way forward is to treat emotional state the same way we treat other system state.

  1. Define a small set of primitives.
  2. Define their structure.
  3. Define how they change.
  4. Define how they influence actions.

Just state, transitions, and thresholds.
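As an illustration of where those four steps might lead, the whole thing can be written down as plain data rather than prose scattered across prompts; every primitive name, transition, and threshold below is invented for the example:

```python
# Illustrative only: emotional governance declared as data, not prose.
EMOTIONAL_GOVERNANCE = {
    # 1. Primitives: a small, fixed vocabulary of emotional state.
    "primitives": ["frustration", "trust", "urgency"],

    # 2. Structure: each primitive is a bounded scalar.
    "structure": {name: {"min": 0.0, "max": 1.0}
                  for name in ["frustration", "trust", "urgency"]},

    # 3. Transitions: how events move the state.
    "transitions": {
        "task_failed":    {"frustration": +0.2, "trust": -0.1},
        "task_succeeded": {"frustration": -0.3, "trust": +0.05},
        "deadline_near":  {"urgency": +0.3},
    },

    # 4. Thresholds: how the state influences actions.
    "thresholds": [
        {"when": "frustration >= 0.6", "action": "pause"},
        {"when": "trust < 0.4",        "action": "require_confirmation"},
        {"when": "urgency > 0.7",      "action": "escalate"},
    ],
}
```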

That is the thinking behind the Aifeels specification, which explores what emotional governance might look like if handled as infrastructure instead of intuition.

It is intentionally narrow. Intentionally early. Intentionally incomplete.

Because the goal is not to be right. The goal is to be discussable.


Why this should be open and unglamorous

If emotional governance matters, and I think it will, it should not live inside proprietary platforms or undocumented heuristics.

The point is not to make AI feel more.

It is to help autonomous systems know when to slow down, escalate, or stop.

That is not an emotional problem.

It is a systems problem.


If you are interested in one concrete proposal, the draft specification is here: https://github.com/aifeels-org/aifeels-spec

And if you think this framing is wrong, that is even better. Standards only get better when people disagree loudly and thoughtfully.