What This Article Examines

Consider the sentence: "All credentials must be verified before access is granted."

It appears normal. It sounds official. But two essential questions remain: Who is supposed to verify them? Who gave the instruction?

These questions anchor the published article “Syntax Without Subject.” The study investigates how artificial intelligence, particularly large language models (LLMs), produces documents that regulate behavior, impose duties, and outline procedures, yet do so without identifying a speaker, an agent, or an institution. The surface grammar remains intact. The source disappears.

This phenomenon is not limited to edge cases. It is becoming the norm in AI-generated legal disclaimers, onboarding flows, privacy notices, and internal compliance documents.

What the Corpus Shows

A total of 172 institutional texts generated by LLMs were analyzed. These documents came from three domains where attribution and responsibility are legally required:

Law: including contracts, terms of service, policy statements

Healthcare: including patient onboarding documents, consent templates

Administration: including HR manuals, IT protocols, compliance checklists

Across these outputs, three grammatical strategies appeared repeatedly:

2.1 Passive constructions

Example: “Data shall be retained for security purposes.” The sentence performs a directive function without naming the actor. It hides the source of obligation. In legal terms, this undermines enforceability.

2.2 Nominalizations

Example: “Failure to comply with the submission requirement will result in denial.” Here, the verb “submit” is turned into the noun “submission.” The person who is expected to act is eliminated from the sentence structure. The syntax is formal, yet it no longer contains a participant.

2.3 Instruction templates without subjects

Example: “Complete onboarding before accessing records.” This is an imperative clause with no subject. It directs behavior, but the sentence gives no clue as to who should act or who enforces the rule.

Each of these examples is grammatically correct. In many cases, they conform to institutional style guides. What unites them is that they erase the speaker syntactically.
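As a rough illustration only, the three strategies can be approximated with surface-level heuristics. The regular expressions and category labels below are assumptions made for demonstration; they are a sketch, not the method used in the study:

```python
import re

# Illustrative surface heuristics for the three subjectless strategies.
# These patterns are a demonstration sketch, not the study's methodology.
PASSIVE = re.compile(
    r"\b(?:is|are|was|were|be|been|being|shall be|will be|may be)\s+\w+ed\b",
    re.IGNORECASE,
)
NOMINALIZATION = re.compile(r"\b\w+(?:tion|ment|ance|ence)\b", re.IGNORECASE)
SUBJECTLESS_IMPERATIVE = re.compile(r"^(?:Complete|Submit|Verify|Ensure|Review)\b")

def classify(sentence: str) -> list[str]:
    """Return the subjectless strategies a sentence appears to use."""
    labels = []
    if PASSIVE.search(sentence):
        labels.append("passive")
    if NOMINALIZATION.search(sentence):
        labels.append("nominalization")
    if SUBJECTLESS_IMPERATIVE.match(sentence.strip()):
        labels.append("subjectless imperative")
    return labels

examples = [
    "Data shall be retained for security purposes.",
    "Failure to comply with the submission requirement will result in denial.",
    "Complete onboarding before accessing records.",
]
for s in examples:
    print(s, "->", classify(s))
```

Each of the article's three examples triggers exactly one category under these toy patterns; a serious implementation would need dependency parsing rather than regexes.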

Why This Has Consequences

Where no one is named, no one is responsible.

In legal and institutional contexts, language must not only function—it must anchor duties, permissions, and liabilities to identifiable agents. Clauses that perform functions without specifying who is involved create unattributable obligations.

Consider a real-world example. In March 2024, an AI-generated privacy update from a mid-size financial platform included this line: “Personal data may be used to improve user experience.” No actor was specified. When challenged by a regional privacy commission, the company could not demonstrate who authored or approved the sentence. The clause was ruled non-compliant under Article 5 of the GDPR.

The article proposes the notion of a traceability threshold. This marks the point at which a sentence continues to exert formal authority, but no longer contains any referential anchor to a person or institution. Beyond this point, grammar functions like code: it does not describe. It executes.
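One purely illustrative way to operationalize the threshold is to flag sentences that carry directive force yet name no agent. The word lists below are assumptions for demonstration, not criteria from the article:

```python
import re

# Toy operationalization of the "traceability threshold": directive force
# with no referential anchor. Word lists are illustrative assumptions.
DIRECTIVE = re.compile(r"\b(?:shall|must|will|may)\b", re.IGNORECASE)
AGENT = re.compile(
    r"\b(?:we|I|you|the (?:company|provider|controller|department|administrator))\b",
    re.IGNORECASE,
)

def beyond_threshold(sentence: str) -> bool:
    """True if the sentence issues a directive but anchors it to no agent."""
    return bool(DIRECTIVE.search(sentence)) and not AGENT.search(sentence)

# The GDPR example from above: directive modal "may", no named agent.
print(beyond_threshold("Personal data may be used to improve user experience."))
# A version with a referential anchor does not cross the threshold.
print(beyond_threshold("We will retain your data for 30 days."))
```

The point of the sketch is the asymmetry it exposes: rewriting the flagged clause with an explicit agent is trivial, yet the subjectless form is what the corpus shows LLMs defaulting to.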

What This Reveals About AI-Governed Language

As public and private institutions adopt LLM-generated language, we observe a structural inversion. Instead of authors producing rules, compiled grammatical forms are deployed to simulate authority. The sentence remains, but the speaker is missing.

This change does not resemble propaganda or censorship. It emerges through formatting.

Several government agencies and corporations are now experimenting with LLM-generated drafts for internal use. In many cases, these drafts are deployed without post-editing. When internal directives carry no author, no named unit, and no signatory, they cease to be communicative. They become synthetic operations.

The danger lies in the appearance of normality. These sentences look professional, clear, and neutral. However, their neutrality is not communicative. It is structural. Authority persists, but accountability dissolves.

This shift redefines what it means for language to govern.

The Stakes

If we continue to accept language that commands without speaking, that regulates without attribution, we reshape the foundations of law, policy, and institutional legitimacy. Directives are no longer written. They are compiled. What matters is not who says them, but whether the sentence looks official.

This logic affects public-facing systems. It also affects internal governance. Compliance software, automated auditing, HR communication tools, and even municipal interfaces already adopt AI-generated text. The formal quality increases. The referential clarity decreases.

Language continues to operate. But the speaker is no longer required.

🔗 Article Syntax Without Subject: Structural Delegation and the Disappearance of Political Agency in LLM-Governed Contexts https://doi.org/10.5281/zenodo.16571077

Agustin V. Startari

Linguist and researcher in historical and artificial language systems

ORCID: https://orcid.org/0009-0001-4714-6539

SSRN Author ID: 7639915

Ethos

I do not use artificial intelligence to write what I do not know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored. — Agustin V. Startari