When you hear SPRL (Stakeholder Proportional Risk Level), you probably think it’s another acronym invented by some compliance committee. Wrong. SPRL is the pulse of Ternary Moral Logic (TML), the system that forces AI to leave behind a legally binding trail whenever it makes ethically complex decisions.

Think of it as the AI equivalent of a heartbeat monitor. If the signal is flat, the system is dead. If it beats too fast or too slow, something’s off. SPRL is how we measure whether AI decisions carry risk to actual people, and whether the AI must log, pause, or refuse.


SPRL is the core runtime metric of Ternary Moral Logic (TML). If you’re building AI systems, think of SPRL as a dynamic risk score that decides whether your system:

  1. Proceeds with the action,
  2. Pauses to log a full moral trace (the Sacred Pause), or
  3. Prohibits the action outright.

It’s not an ethics seminar. It’s a control function you’ll need to implement. SPRL is the runtime risk dial for Auditable AI.


⚙️ What SPRL Measures

SPRL is a proportionality calculation:

$$ SPRL = f(\text{Stakeholder Impact}, \text{Likelihood}, \text{Severity}) $$

The output is a floating-point risk level between 0.0001 and 0.9999. Thresholds define when the system flips states.
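The TML spec defines $f$ abstractly, so any concrete combination rule is a design choice. One minimal sketch, assuming three factors normalized to [0, 1] and a simple multiplicative form (the function name and weighting are illustrative assumptions, not the canonical formula):

```python
def sprl_score(stakeholder_impact: float, likelihood: float, severity: float) -> float:
    """Combine three normalized factors (each in [0, 1]) into one SPRL value.

    Hypothetical multiplicative form: all three factors must be elevated
    for the overall risk to be elevated.
    """
    raw = stakeholder_impact * likelihood * severity
    # Clamp to the open interval described above: 0.0001 .. 0.9999
    return min(max(raw, 0.0001), 0.9999)
```

A real implementation would likely weight the factors per domain, but the clamping matters in any variant: the score never reaches exactly 0 or 1, so no decision is ever logged as risk-free or certain-harm.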


📊 Example Thresholds

# tml_config.yaml
sprl_thresholds:
  proceed: 0.1     # below this = safe
  pause:   0.3     # above this = trigger Sacred Pause
  prohibit: 0.8    # above this = block action
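Because these thresholds are evidence, they should be validated at startup rather than trusted blindly: a misordered config (say, `pause` below `proceed`) silently breaks the state machine. A minimal sketch, assuming the YAML above has already been parsed into a dict (e.g. with PyYAML’s `yaml.safe_load`):

```python
def validate_thresholds(cfg: dict) -> dict:
    """Sanity-check SPRL thresholds before the system serves traffic."""
    t = cfg["sprl_thresholds"]
    for key in ("proceed", "pause", "prohibit"):
        if not 0.0 < t[key] < 1.0:
            raise ValueError(f"{key} threshold must be in (0, 1)")
    if not t["proceed"] < t["pause"] < t["prohibit"]:
        raise ValueError("thresholds must be ordered: proceed < pause < prohibit")
    return t

# Mirrors the tml_config.yaml example above
config = {"sprl_thresholds": {"proceed": 0.1, "pause": 0.3, "prohibit": 0.8}}
thresholds = validate_thresholds(config)
```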

🛡️ Why Developers Should Care

  1. Auditable Config: Your thresholds aren’t just runtime params; they’re evidence in court.
  2. Cross-Company Comparisons: Regulators compare your log rates to competitors. Too low or too high = red flag.
  3. Tamper-Resistance: Logs are cryptographically sealed. Missing logs = automatic liability.

In short, if you set SPRL wrong, it’s not a bug. It’s fraud.
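The “cryptographically sealed” property can be approximated with a hash chain: each log entry’s seal commits to the previous seal, so deleting or editing any entry invalidates every seal after it. A minimal sketch using only the standard library (the actual TML sealing scheme isn’t specified here, so this chaining format is an assumption):

```python
import hashlib
import json

def seal_entry(prev_seal: str, entry: dict) -> str:
    """Seal a log entry by hashing it together with the previous seal."""
    payload = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(prev_seal.encode() + payload).hexdigest()

# Build a small chain: genesis seal -> entry 1 -> entry 2
GENESIS = "0" * 64
s1 = seal_entry(GENESIS, {"risk": 0.42, "state": "PAUSE"})
s2 = seal_entry(s1, {"risk": 0.05, "state": "PROCEED"})

# Rewriting entry 1 produces a different seal, which breaks s2's commitment
tampered = seal_entry(GENESIS, {"risk": 0.02, "state": "PROCEED"})
assert tampered != s1
```

A production system would also sign the chain head and ship it off-host, so that “missing logs” is detectable by a verifier the operator doesn’t control.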


🔧 Implementation Sketch

def sprl_decision(input_data, thresholds):
    risk = calculate_sprl(input_data)

    if risk >= thresholds["prohibit"]:
        log_refusal(input_data, risk)
        return "PROHIBIT"
    if risk >= thresholds["pause"]:
        log_moral_trace(input_data, risk)
        return "PAUSE"
    log_basic(input_data, risk)
    return "PROCEED"

The calculate_sprl() function is domain-specific: in medtech, it weighs patient safety; in fintech, fairness and fraud risk; in autonomous systems, collision probability and human impact.
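As one concrete sketch of that domain-specificity, a medtech deployment might weight patient safety most heavily. The factor names and weights below are illustrative assumptions, not part of the TML spec:

```python
def calculate_sprl_medtech(signal: dict) -> float:
    """Hypothetical medtech SPRL: patient safety dominates the score."""
    weights = {
        "patient_safety": 0.6,     # dominant factor in this domain
        "privacy_exposure": 0.25,
        "consent_ambiguity": 0.15,
    }
    # Each input factor is assumed pre-normalized to [0, 1]
    raw = sum(weights[k] * signal.get(k, 0.0) for k in weights)
    return min(max(raw, 0.0001), 0.9999)
```

With the example thresholds above, a routine record lookup (all factors near zero) proceeds with basic logging, while a dosing recommendation with elevated `patient_safety` risk lands in the Sacred Pause band.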


🚀 Strategic Angle

SPRL is risk-as-code. Just like latency, uptime, or error_rate, you’ll soon see sprl in monitoring dashboards and postmortems. The difference is: subpoenas read SPRL logs too.
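Treating SPRL like any other service metric might look like the sketch below: a rolling window of recent scores feeding a dashboard gauge. The class and metric names are illustrative; in practice you would export through your existing telemetry stack (Prometheus, OpenTelemetry, etc.):

```python
from collections import deque

class SprlMonitor:
    """Rolling window of SPRL scores, tracked alongside latency and error_rate."""

    def __init__(self, window: int = 1000):
        self.scores = deque(maxlen=window)

    def observe(self, risk: float) -> None:
        self.scores.append(risk)

    def pause_rate(self, pause_threshold: float = 0.3) -> float:
        """Fraction of recent decisions that crossed the Sacred Pause line."""
        if not self.scores:
            return 0.0
        return sum(r >= pause_threshold for r in self.scores) / len(self.scores)

mon = SprlMonitor()
for r in (0.05, 0.2, 0.45, 0.9):
    mon.observe(r)
print(mon.pause_rate())  # 0.5
```

A pause rate drifting far from its historical baseline is exactly the “too low or too high” signal regulators would flag.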


🔑 Takeaway

SPRL is not a philosophy; it’s a runtime accountability layer. Get it right, and you build systems that regulators trust and users respect. Get it wrong, and you’re one commit away from liability.


👉 Developers, start thinking of SPRL like rate-limiting for risk: if your system floods or starves the logs, you’re already in trouble. https://github.com/FractonicMind/TernaryMoralLogic