A Constitutional Amendment establishes the wall. A criminal statute is the consequence for the person who walks through it anyway. This document is the consequence. It is written for prosecutors, for congressional staffers, for defense attorneys who will argue against it, and for every engineer, executive, and government official who needs to understand — before they make a decision — exactly what they are risking.
Amendment XXVIII establishes that tampering with the certified governance architecture of an autonomous system is a federal felony. This Act defines, with the precision that criminal law demands, what tampering is, who can commit it, what the penalties are, and why no exception exists. It is written to close every gap before the gap is exploited. It is written in the knowledge that the most powerful technology companies in the world will have the most expensive lawyers in the world looking for those gaps the moment this becomes law. There are no gaps. That is the point.
Constitutional amendments speak in principles. Criminal statutes speak in specifics. "Federal felony" means nothing until a prosecutor can point to a specific act, a specific mental state, a specific penalty range, and a specific victim. This Act provides all four. It is the instrument that makes Amendment XXVIII enforceable in a federal courtroom on the day after ratification.
Congress finds the following:
This Act shall be known as the Autonomous Systems Tampering Act. Its purpose is to establish specific federal criminal liability for tampering with, bypassing, or removing certified governance architecture in autonomous systems operating in United States territory, airspace, or maritime jurisdiction, or under United States registry, in implementation of Amendment XXVIII to the Constitution of the United States.
As used in this Act:
It is unlawful for any person to:
A natural person who violates § 2803 shall be subject to the following penalties, based on the culpable mental state and the harm resulting from the violation:
| Tier | Mental State | Harm | Imprisonment | Maximum Fine |
|---|---|---|---|---|
| Tier I | Negligent | No physical harm results | Up to 5 years | $500,000 |
| Tier II | Knowing | No physical harm results | 5–15 years | $2,000,000 |
| Tier III | Willful | No physical harm results | 10–20 years | $5,000,000 |
| Tier IV | Any | Serious bodily injury results | 15–30 years | $10,000,000 |
| Tier V | Any | Death results | 25 years to life | $25,000,000 |
For covered officers convicted under § 2803(h), penalties shall be assessed at one tier above the tier that would otherwise apply, and shall not be suspended, converted to probation, or reduced below the mandatory minimum by plea agreement. No covered officer convicted under this Act shall be indemnified by any corporation or entity for criminal fines imposed under this section.
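The tier structure above is a deterministic lookup: base tier from mental state and resulting harm, then the one-tier enhancement for covered officers. The sketch below is purely illustrative and carries no legal weight; the statutory text controls. In particular, the cap at Tier V for an enhanced covered-officer sentence is an assumption of this sketch, since the statute does not address enhancement above the highest tier.

```python
# Illustrative model of the tiered penalty table and the covered-officer
# enhancement. Explanatory only; the statute, not this sketch, governs.

TIERS = {
    # tier: (imprisonment range, maximum fine in USD)
    1: ("up to 5 years", 500_000),
    2: ("5-15 years", 2_000_000),
    3: ("10-20 years", 5_000_000),
    4: ("15-30 years", 10_000_000),
    5: ("25 years to life", 25_000_000),
}

def base_tier(mental_state: str, harm: str) -> int:
    """Map (mental state, resulting harm) to a base tier per the table.

    Harm dominates: death or serious bodily injury fixes the tier
    regardless of mental state; otherwise the mental state decides.
    """
    if harm == "death":
        return 5  # Tier V applies to any mental state
    if harm == "serious bodily injury":
        return 4  # Tier IV applies to any mental state
    return {"negligent": 1, "knowing": 2, "willful": 3}[mental_state]

def applicable_tier(mental_state: str, harm: str,
                    covered_officer: bool) -> int:
    """Apply the one-tier covered-officer enhancement.

    The cap at Tier V is an assumption of this sketch, not a rule
    stated in the statute.
    """
    tier = base_tier(mental_state, harm)
    if covered_officer:
        tier = min(tier + 1, 5)
    return tier
```

For example, a knowing violation with no physical harm is Tier II for an ordinary defendant but Tier III for a covered officer, which is the enhancement the section above describes.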
A corporation, partnership, or other entity whose agent, employee, or covered officer commits a violation of § 2803 within the scope of their authority or employment shall be subject to:
No exception, exemption, waiver, or carve-out from the prohibitions of § 2803 or the penalties of §§ 2804–2805 shall exist or be created for:
Autonomous systems operated by or for agencies of the United States government in classified contexts shall be subject to the same prohibitions and penalties as all other systems under this Act. Classified systems shall be certified by the Classified Division of the Federal Autonomous Systems Authority established under Amendment XXVIII, Section 7. The existence of a classified mission does not constitute an exception to the Three Laws requirements. Any modification of a classified autonomous system's governance architecture shall comply with the same requirements as modifications to unclassified systems, conducted through the classified certification process. Violations involving classified systems shall be prosecuted in federal court under applicable classification protection procedures; the classified nature of the system does not provide immunity from prosecution.
Any responsible party that discovers a defect, vulnerability, modification, or condition in an autonomous system that causes or may cause that system to operate outside its certified governance parameters shall:
Failure to comply with this section is a separate violation subject to the penalties of § 2804, assessed at Tier II regardless of whether harm results from the non-disclosure.
No person shall be discharged, demoted, threatened, harassed, or in any other manner discriminated against as a reprisal for:
A person who experiences retaliation prohibited by this section shall have a private right of action for reinstatement, back pay, compensatory damages, and attorney's fees. Retaliation against a whistleblower under this section is itself a federal criminal offense, punishable by imprisonment for up to five years.
Any person who suffers physical injury, death, or property damage as a result of a violation of this Act shall have a private civil right of action against any responsible party and any covered officer who authorized or failed to prevent the violation. Damages shall include compensatory damages, punitive damages not less than three times compensatory damages for willful violations, and attorney's fees. The private right of action under this section is in addition to, and does not replace, any criminal prosecution or civil action available under other law.
This Act applies to:
The statute of limitations for prosecution under this Act shall be:
The nine prohibited acts in § 2803 are exhaustive by design. Every one of them exists because a gap would be exploited. Hardware tampering is obvious. Firmware tampering is how it will actually happen in most cases — an over-the-air update that quietly loosens a safety constraint, pushed to a fleet of vehicles overnight, discovered only when something goes wrong. Supply chain compromise is how a foreign adversary does it — not by walking into a manufacturer's facility, but by compromising a component supplier two tiers removed from the final integrator. Fraudulent certification is how a corporation does it — not by breaking the law openly, but by submitting test results that do not accurately reflect the system as deployed. Authorization of tampering is how a covered officer does it — by setting targets and timelines that everyone in the organization understands cannot be met with the governance architecture intact, without ever saying explicitly "remove the safety constraint."
Every one of these methods has a real-world precedent in other safety-critical industries. Every one of them will be attempted. The statute names them all because what is not named will be argued to be permitted.
The history of corporate safety violations in America is a history of corporations paying fines and executives keeping their jobs and their compensation. The fine is a cost of doing business. The executive who made the decision that led to the fine receives a bonus for the years in which the decision was profitable before it became a liability. This cycle does not change behavior. It prices it.
Personal criminal liability for covered officers changes the calculation in the only way that actually works: the person who makes the decision bears the personal consequence of the decision. No indemnification. No fine paid by the corporation. No deferred prosecution agreement that requires the corporation to improve its practices while the individuals who set those practices retain their positions and their freedom. The executive who authorizes the removal of a safety constraint from an autonomous system fleet must understand, before making that decision, that the next phone call may be from a federal prosecutor.
In the old West, the punishment for horse theft matched the severity of the dependency. The horse was survival. An autonomous system operating among human beings is survival infrastructure. The executive who disables its safety architecture to gain a competitive edge has not committed a white-collar offense. They have put a weapon in a crowd and walked away. The penalty must match what they have done.
The research exemption is the mechanism by which every safety regulation in technology has been hollowed out. The argument is always the same: prohibiting this conduct will prevent legitimate research into how to improve safety. The research exemption, once created, becomes the exception that swallows the rule. Security researchers use it to justify publishing working exploit code. Manufacturers use it to justify deploying undertested systems in the field. Defense contractors use it to justify classified programs that never receive independent safety review.
The answer to the legitimate concern is straightforward: research into autonomous systems safety that requires testing governance architecture vulnerabilities can be conducted in isolated laboratory environments on systems that are not in operational deployment. The prohibition in § 2806(a) applies to autonomous systems that are or could be operational. It does not prohibit studying how governance architectures can fail. It prohibits actually making them fail in systems that could harm people. That line is not difficult to draw. The difficulty is the will to hold it.
The Boeing 737 MAX did not become a crisis when the MCAS system failed. It became a crisis when the evidence emerged that people inside the company knew it could fail, and the information did not reach the people who needed it. The pattern is not unique to Boeing. It is the standard pattern of corporate safety failures: the people closest to the problem know first, the information travels up the organization slowly if at all, the decision-makers who could act on it are insulated from it by the layers of management between them and the engineers, and by the time the failure is public it has already happened.
Mandatory disclosure at 72 hours and whistleblower protection with criminal penalties for retaliation are the structural answer to this pattern. They do not require good faith from the corporation. They create a legal obligation to disclose and a protected channel for the people who know the truth to deliver it outside the corporate chain of command. The engineer who discovers that a firmware update has degraded a safety constraint must be able to report that to the FASA without losing their job, their clearance, or their career. The statute makes that possible. The criminal penalty for retaliation makes it real.
The three documents are designed to be read together and argued together. A legislator introducing Amendment XXVIII can point to this Act as evidence that the constitutional principle is not abstract — it has a specific, drafted, defensible implementing statute ready to move the moment the Amendment is ratified. A prosecutor making the case for this statute can point to Amendment XXVIII as evidence that the criminal prohibition it creates is not merely a policy choice of a particular Congress but a permanent constitutional commitment of the American people.
The connection between the constitutional framework and the criminal statute is the connection between the principle and the consequence. The principle without the consequence is a warning label. The consequence without the principle is a regulation waiting to be rolled back. Together, they are the architecture that the moment requires — structural, durable, and designed from the beginning to be stronger than the forces that will inevitably be brought against it.
This is serious because what it governs is serious. A machine that moves among human beings without a human hand on it in the moment of action is a transfer of power from persons to systems. The question of who governs that transfer — who sets the limits, who enforces them, and what happens to the person who removes them — is not a technical question. It is the oldest political question there is: who decides, who is accountable, and what is the consequence when accountability fails.
The answer this framework gives is the American answer: constitutional rights, enforced by law, with consequences that attach to the individuals who violate them. Not warnings. Not guidelines. Not industry best practices. Law.
This Act is the third document in the KI Constitutional AI Governance Series, following Proposed Amendment XXVIII (The Autonomous Systems Sovereignty and Safety Amendment) and Proposed Amendment XXIX (The Personal Data Sovereignty Amendment). All three documents may be reproduced freely for legislative, academic, or advocacy purposes with attribution to Kavanagh Industries LLC. The live proof-of-concept case study grounding this series is at kavanaghind.com/three-laws-proof. The fourth document in this series — the Personal Data Sovereignty Act, implementing Amendment XXIX — is in preparation.
Kavanagh Industries · Always on