Part 1: FedRAMP Needs a Security Ledger—Not Just a Checklist

government
March 31, 2025

By Irina Denisenko, CEO of Knox Systems

FedRAMP has long set the benchmark for cloud security compliance in the public sector. But its current structure—based on periodic assessments and voluminous documentation—struggles to reflect real-time risk and operational truth. What’s missing is not just a better checklist. What’s missing is a Security Ledger.

Just as blockchain introduced the concept of an immutable ledger to prove ownership in crypto, a Security Ledger would establish a tamper-proof, transparent record of an organization’s control posture: Are you compliant or not—and with what level of confidence?

But unlike public blockchains, this ledger isn’t visible to the world. Access is strictly limited to the parties who need to validate the system's security:

  • The Cloud Service Provider (CSP)
  • The consuming Agency(ies)
  • The authorized Third-Party Assessors (3PAOs)

No one else. This is a permissioned ledger, designed for shared trust between verified participants, not public exposure.

But security controls aren't binary. In practice, compliance lives on a spectrum. Some controls are fully satisfied, others only partially. Evidence decays. Systems drift. Risk must be constantly re-evaluated. That’s where Bayesian reasoning comes in. By applying Bayes' Theorem to control assessment—drawing from the excellent work by Stephen Shaffer—we can quantify our belief in the effectiveness of each control and update it continuously based on new observations.

So how do we build this?

The answer lies in Prometheus—the open-source monitoring system that already powers observability at scale across the cloud. Prometheus is built for high-volume, time-series data and excels at continuously scraping, storing, and querying metrics. It's an ideal foundation for a risk-adjusted compliance telemetry layer.

Imagine a system where every FedRAMP control has a corresponding set of observable metrics—scraped, labeled, and stored over time using Prometheus. These metrics feed into a Bayesian model that computes dynamic confidence scores for each control. When paired with a cryptographically verifiable ledger system, this becomes a living, breathing compliance profile: a Security Ledger that is transparent, provable, and grounded in operational reality.

At Knox, we’re building toward this future—one where compliance is not a static report, but a living signal. Powered by open standards like Prometheus and informed by probabilistic models, this is how we transform trust: from paperwork to math.

Stay tuned for Part 2, where our CTO will deep-dive into how Knox envisions the mechanics behind risk-adjusting control confidence using Bayesian inference—and how we ensure the immutability and auditability of that data using Amazon Aurora PostgreSQL. We’ll walk through how likelihood ratios are assigned, how evidence is evaluated in real time, and why open-sourcing the control model is essential to building trust in the next era of FedRAMP.

Part 3: Toward Continuous Compliance: Open Telemetry, Control Coverage, and the Role of the 3PAO

government
March 31, 2025

By Casey Jones, Chief Architect of Knox Systems

In Part 1, we proposed the concept of a Security Ledger: a cryptographically verifiable system of record for compliance that updates continuously based on real-time evidence. In Part 2, we detailed how risk-adjusted confidence scores can be calculated using Bayes’ Theorem and recorded immutably in Amazon Aurora PostgreSQL.

In this third and final part of the series, we focus on the next frontier: standardizing telemetry coverage across controls, open-sourcing the control-to-evidence map, and redefining the role of the 3PAO to ensure integrity in a continuous compliance world.

Building the Open Compliance Telemetry Layer

In order for the Security Ledger to be trustworthy, it must be fed with comprehensive, observable evidence across the full FedRAMP boundary. That means creating a control-to-telemetry map that:

  • Defines what evidence types are relevant for each FedRAMP control
  • Maps those to Prometheus-compatible metrics
  • Defines evidence freshness, decay windows, and severity
  • Supports automated generation of control coverage reports
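
One entry in such a map might look like the following sketch. The control ID comes from NIST 800-53; the metric names, freshness windows, and severities here are hypothetical illustrations, not a published Knox schema:

```python
# Hypothetical control-to-telemetry map entry for SI-2 (Flaw Remediation).
# Metric names and freshness windows are illustrative, not a real spec.
CONTROL_TELEMETRY_MAP = {
    "SI-2": {
        "evidence": [
            {
                "name": "high_cvss_unpatched",
                "metric": "vuln_scanner_high_cvss_open_total",  # Prometheus-style counter
                "freshness_hours": 24,
                "severity": "high",
            },
            {
                "name": "monthly_patching_completed",
                "metric": "patch_cycle_completed_timestamp",
                "freshness_hours": 24 * 31,
                "severity": "medium",
            },
        ],
    },
}

def coverage_report(required: dict, observed_metrics: set[str]) -> dict:
    """List, per control, which mapped metrics are missing from live telemetry."""
    return {
        control: [e["metric"] for e in spec["evidence"]
                  if e["metric"] not in observed_metrics]
        for control, spec in required.items()
    }
```

A gap reported here is exactly the "incomplete telemetry" condition discussed below: a control whose mapped metrics are not all being observed cannot be fully scored.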

At Knox, we’re working to open-source this telemetry model so that:

  • Every stakeholder (CSPs, 3PAOs, agencies) understands the required observability footprint
  • No one is guessing what counts as evidence
  • The community can contribute new detectors and mappings

Just like OWASP standardized threat awareness, we need a COTM (Common Observability for Trust Model).

Coverage Is the Control: Incomplete Telemetry ≠ Compliance

In the current FedRAMP model, it's possible to "pass" controls without actually observing the whole system. But in a ledger-based model, telemetry gaps are violations.

Examples of common pitfalls:

  • Only scanning certain subnets or environments (e.g., “we forgot our staging VPN”)
  • Disabling or misconfiguring logging for noisy subsystems
  • Letting vulnerability scan coverage drop below 100% of the boundary
  • Using static evidence from prior scans without freshness guarantees
  • Allowing Prometheus exporters to fail silently without alerting

In a real-time, risk-scored model, all of these create confidence decay—and should result in lowered scores or even automated POA&M creation.
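
Confidence decay can be sketched as a weight on each piece of evidence that fades as it ages past its freshness window. This is a minimal illustration with a linear ramp; the window lengths and decay shape are assumptions, not Knox's actual model:

```python
def decayed_llr(llr: float, age_hours: float, freshness_hours: float) -> float:
    """Scale an evidence LLR toward zero as it ages past its freshness window.

    Linear ramp: full weight while fresh, no weight past twice the window.
    Window lengths and ramp shape are illustrative, not policy.
    """
    if age_hours <= freshness_hours:
        return llr
    if age_hours >= 2 * freshness_hours:
        return 0.0  # fully stale evidence contributes nothing
    return llr * (1.0 - (age_hours - freshness_hours) / freshness_hours)

# A 36-hour-old result against a 24-hour freshness window keeps half its weight
```

Under a scheme like this, a stale scan silently stops propping up a control's score, which is what forces the re-scan rather than letting old evidence stand in for new.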

The New Role of the 3PAO: Continuous Verifier of Scope, Integrity, and Fair Play

In a world where compliance is driven by real-time evidence, the Third Party Assessment Organization (3PAO) becomes more critical—not less.

But their role shifts from "point-in-time validator" to continuous integrity checker.

Here’s what the 3PAO’s job looks like in a Knox-style system:

1. Boundary Enforcement

  • Validate that all components within the FedRAMP boundary are included in telemetry coverage
  • Detect "convenient omissions" (e.g., shadow servers, unmonitored edge cases)

2. Signal Integrity

  • Confirm that metrics flowing into the Security Ledger are accurate, unmodified, and traceable
  • Review sampling intervals, evidence freshness, and exporter health
  • Perform forensic verification of selected evidence streams

3. Anti-Fraud Auditing

  • Detect signs of foul play or negligence, such as:
    • Turning off scanning before high-risk deploys
    • Creating “burner” environments that avoid monitoring
    • Suppressing alert signals or log forwarders
    • Replaying old data to simulate real-time telemetry

4. Ledger Auditing

  • Verify the cryptographic chain of trust in the ledger system (e.g., via Amazon Aurora PostgreSQL or blockchain)
  • Ensure control scores are only adjusted by valid evidence with assigned LLRs
  • Validate that manual overrides are documented and signed
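
The chain-of-trust check in item 4 can be sketched as a simple hash chain, where each revision's hash covers both its contents and the previous revision's hash. This is a minimal sketch; the field names and genesis value are assumptions, and a production ledger would add signatures on top:

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a revision's contents together with the previous revision's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(revisions: list[dict]) -> bool:
    """Recompute every link; tampering with any revision breaks all later hashes."""
    prev = "0" * 64  # assumed genesis value
    for rev in revisions:
        if rev["hash"] != record_hash(rev["data"], prev):
            return False
        prev = rev["hash"]
    return True
```

Because each link depends on the one before it, a 3PAO only needs the head hash plus the stored revisions to detect retroactive edits anywhere in the history.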

In this model, the 3PAO becomes the trust anchor of the continuous compliance pipeline.

They’re not just checking boxes—they’re inspecting the wiring.

Transparency Through Community

All of this only works if the model is open:

  • The LLRs for each control must be public
  • The control-to-metrics map must be versioned and community-governed
  • The Security Ledger’s core schema must be inspectable and verifiable

Just as large language models opened their weights to gain credibility, compliance models must open their logic. Closed-source compliance logic is a liability.

The Future of FedRAMP Is Verifiable, Transparent, and Alive

We’re not just building for ATOs—we’re building for continuous trust.

FedRAMP’s future lies in:

  • Real-time metrics
  • Probabilistic control scoring
  • Immutable audit trails
  • Open-source control logic
  • 3PAOs as continuous validators, not just periodic checkers

At Knox, we’re committed to that shift—because trust shouldn’t expire every 12 months.

Part 2: Toward Continuous Compliance: Quantifying Risk with Bayes and Capturing Evidence in a Security Ledger

government
March 31, 2025

By Chris Johnson, CTO of Knox Systems

In Part 1, we introduced the Security Ledger—a real-time, tamper-proof system that reframes FedRAMP compliance as a probabilistic, continuously updated measure, not a static report. Now, in Part 2, we go under the hood.

We'll show how Bayesian inference, log-likelihood ratios (LLRs), and ledger-based transparency work together to produce a living risk engine—one that is inspectable, auditable, and mathematically defensible.

And yes, we brought code and real data.

From Binary to Bayesian: Probabilistic Assurance of Control Effectiveness

FedRAMP controls aren’t simply "on" or "off." Their effectiveness shifts with context, evidence, and time. So we treat each control as a probabilistic hypothesis:

P(Control is Effective | Evidence)

This lets us reason continuously over real-world telemetry: IAM logs, patch scans, drift reports, vulnerability findings, and more. The system updates confidence scores in real time—no waiting for annual audits.

Step 1: Assigning Prior Probabilities

Every control begins with a prior belief—a starting point for how likely it is to be effective. These priors are informed by:

  • Control category (e.g. access control vs. incident response)
  • Historical failure rates
  • Threat modeling and exploit severity
  • Complexity and likelihood of drift

Example:

{
  "AC-2": { "prior": 0.90 },
  "SC-12": { "prior": 0.75 },
  "SI-2": { "prior": 0.60 }
}

These priors are tunable and evolve with new deployments and observed outcomes.

Step 2: Defining Evidence and LLRs

We define discrete evidence events—findings that either increase or decrease confidence in a control. Each is assigned a log-likelihood ratio (LLR):

log(posterior odds) = log(prior odds) + Σ LLRs

This additive update makes real-time scoring efficient and interpretable.

Example for SI-2 (Flaw Remediation):

"SI-2": {
  "evidence": [
    { "name": "high_cvss_unpatched", "llr": -2.5 },
    { "name": "monthly_patching_completed", "llr": 1.0 },
    { "name": "vuln_scanner_stale", "llr": -1.0 }
  ]
}

LLRs are computed based on empirical data and mapped to actual telemetry triggers.

Real-World Example: AC-2 (Account Management)

From our working model:

  • Risk Scenario: A former employee's account is still active and exploited
  • P(A): 0.3 (probability of compromise if ineffective)
  • Evidence LLRs:
    • Account review overdue: -1.2
    • No MFA for privileged accounts: -1.5
    • Active Directory logs confirm removal: +1.0

This model is applied to all 323 FedRAMP Moderate controls using structured data and open analysis:
🔗 GitHub Repo: Knox-Gov/nist_bayes_risk_auto

Prioritizing What Matters: The High-Risk Controls

Using this model, we ranked all FedRAMP Moderate controls by severity and potential impact.

The Top 11 High-Risk Controls stood out due to:

  • High exploitation risk
  • Poor observability without targeted telemetry
  • Broad system impact if compromised

These controls form the foundation of our telemetry blueprint—what every system should continuously monitor and score.

Step 3: Continuous Confidence Calculation

Every time Prometheus scrapes a new metric:

  1. Convert prior to log-odds
  2. Add up matching LLRs
  3. Convert back to a probability using the logistic function:

P = 1 / (1 + e^(-log odds))

This produces a dynamic confidence score for each control, updated in real time as evidence changes.
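
The three steps can be sketched directly, using the AC-2 numbers from the example above (prior 0.90; evidence LLRs of -1.2, -1.5, and +1.0):

```python
import math

def posterior_confidence(prior: float, llrs: list[float]) -> float:
    """Steps 1-3: prior to log-odds, add evidence LLRs, logistic back to P."""
    log_odds = math.log(prior / (1.0 - prior))   # 1. convert prior to log-odds
    log_odds += sum(llrs)                        # 2. add up matching LLRs
    return 1.0 / (1.0 + math.exp(-log_odds))     # 3. logistic function

# AC-2: prior 0.90; overdue review (-1.2), no MFA (-1.5), AD removal confirmed (+1.0)
score = posterior_confidence(0.90, [-1.2, -1.5, 1.0])  # ≈ 0.62
```

Note the LLRs here are on the natural-log scale, matching the base-e logistic function; confidence in AC-2 drops from 0.90 to roughly 0.62 once the negative evidence lands.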

Step 4: Writing to the Security Ledger (Amazon Aurora PostgreSQL)

Every update—control ID, evidence, LLRs, and confidence score—is appended as a new, immutable revision to Amazon Aurora PostgreSQL, our Security Ledger backend.

Each record includes:

  • Control ID
  • Timestamps
  • Prior and posterior probabilities
  • Evidence names + timestamps
  • LLR sum
  • Operator ID (if manually overridden)

This creates a cryptographically verifiable audit trail. Auditors and agencies can trace any score, see what changed, and confirm whether evidence was valid and in-scope.
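
The append-only revision pattern can be sketched as follows, with SQLite standing in for Aurora PostgreSQL. The table and column names are illustrative, and a real deployment would add the hashing/signing layer that makes the trail cryptographically verifiable:

```python
import json
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # SQLite stands in for Aurora PostgreSQL here
conn.execute("""
    CREATE TABLE ledger_revisions (
        revision_id INTEGER PRIMARY KEY AUTOINCREMENT,
        control_id  TEXT NOT NULL,
        recorded_at REAL NOT NULL,
        prior       REAL NOT NULL,
        posterior   REAL NOT NULL,
        evidence    TEXT NOT NULL,   -- JSON list of evidence names + LLRs
        llr_sum     REAL NOT NULL,
        operator_id TEXT             -- populated only on manual override
    )
""")

def append_revision(control_id, prior, posterior, evidence, operator_id=None):
    """Append a new revision; rows are never updated or deleted in place."""
    conn.execute(
        "INSERT INTO ledger_revisions "
        "(control_id, recorded_at, prior, posterior, evidence, llr_sum, operator_id) "
        "VALUES (?, ?, ?, ?, ?, ?, ?)",
        (control_id, time.time(), prior, posterior,
         json.dumps(evidence), sum(e["llr"] for e in evidence), operator_id),
    )
```

Because every score change is a new row rather than an update, an auditor can replay the full history of any control and check each posterior against its recorded evidence.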

Why This Must Be Open

If machines are going to tell us when a control is “healthy,” then the logic behind it must be transparent.

That’s why we’re open-sourcing:

  • The LLR control dictionary
  • Control-to-evidence mappings
  • Assumptions and source data

Just like LLMs disclose model weights and benchmarks, compliance logic must be explainable, auditable, and improvable by the community.

Compliance is too important to be a black box.

Recap: What We’ve Built

  • Bayesian engine for dynamic scoring
  • Prior and evidence probabilities for every FedRAMP Moderate control
  • Identification of top 11 high-risk controls
  • Immutable compliance ledger in Amazon Aurora PostgreSQL
  • Prometheus telemetry mapping in progress
  • GitHub: Open LLR control spec

Coming in Part 3:

We’ll go deeper into instrumentation—mapping every FedRAMP Moderate control to Prometheus-compatible metrics and redefining the role of the 3PAO as a real-time verifier of system integrity.

The future of trust is continuous, explainable, and open. Let’s build it together.

FedRAMP 20x: The Future of Simplified Cloud Security Compliance

government
March 27, 2025

TL;DR

  • FedRAMP 20x introduces a streamlined, developer-friendly approach to security compliance for cloud service providers (CSPs).

  • It uses code-based JSON reporting to replace traditional manual documentation.

  • Knox Systems’ CMX Platform adds the critical context and automation needed to make this approach work at scale.

What is FedRAMP 20x?

FedRAMP 20x is a transformative new government program announced on March 24, 2025, designed to modernize how cloud service providers (CSPs) demonstrate compliance with FedRAMP security standards.

Instead of relying on manual documents and static reports, FedRAMP 20x introduces a code-driven model for security validation. CSPs can use JSON objects with boolean expressions to represent their system’s current security state—for example: "encryption": true.

This approach aims to make FedRAMP compliance simpler, faster, and more transparent for both providers and agencies.

Why FedRAMP 20x Matters for Cloud Security

The traditional FedRAMP authorization process is known for being complex, outdated, and time-consuming. FedRAMP 20x changes that by:

  • Reducing complexity in cloud security compliance

  • Providing a clear, machine-readable security reporting model

  • Helping agencies and auditors instantly assess security posture

But there's one big challenge: context.

Simplicity Needs Context

Even with automation, a simple flag like "encryption": true doesn’t tell the full story. CSPs still need to prove:

  • Where encryption is applied (e.g., at rest, in transit, internal traffic)

  • How it’s implemented (e.g., key management, algorithms, scope)

  • Whether it complies with NIST 800-53, ZTA, and other frameworks
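
A context-enriched version of the "encryption": true flag might look like the following sketch. The field names are illustrative, not the FedRAMP 20x schema:

```python
import json

# Hypothetical enriched report: the bare boolean plus the context listed above.
# Field names are illustrative, not the FedRAMP 20x schema.
report = {
    "encryption": True,
    "context": {
        "at_rest": {"implemented": True, "algorithm": "AES-256", "key_management": "AWS KMS"},
        "in_transit": {"implemented": True, "protocol": "TLS 1.2+"},
        "internal_traffic": {"implemented": True},
        "framework_mappings": ["NIST 800-53 SC-8", "NIST 800-53 SC-28"],
    },
}

print(json.dumps(report, indent=2))
```

The bare flag answers "is it on?"; the nested context answers where, how, and against which framework—the questions the next section argues most tools leave unanswered.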

That’s where most compliance tools fall short.

How Knox Systems’ CMX Platform Complements FedRAMP 20x

The Knox CMX Platform fills the context gap by acting as a security automation platform that links together:

  • GRC tools (Governance, Risk & Compliance)

  • CNAPPs (Cloud-Native Application Protection Platforms)

  • GitOps and Infrastructure-as-Code pipelines

  • Hyperscale cloud providers like AWS, Azure, and GCP

With Knox, CSPs can:

  • Generate continuous, real-time assessments

  • Track and remediate POA&Ms (Plans of Action & Milestones)

  • Maintain audit-ready compliance documentation

  • Get prescriptive guidance for meeting security standards

The result? Simplified, continuous, and contextual compliance—all integrated into your DevSecOps workflows.

Why This Is a Big Deal for the Industry

FedRAMP 20x is more than a policy change. It marks a paradigm shift in how public-sector cloud security is defined, measured, and verified.

Security teams and CSPs that embrace this model early—especially those using tools like Knox Systems’ CMX Platform—will have a competitive edge in the government cloud marketplace.

Final Takeaway

March 24, 2025, marks the start of a new era in cloud compliance. FedRAMP 20x will reshape how we:

  • Build secure systems

  • Prove compliance

  • And respond to emerging threats

With the Knox CMX Platform, your team is equipped to automate security context, deliver faster FedRAMP readiness, and stay ahead of evolving compliance frameworks.