BiasClear combines deterministic pattern detection with governed AI analysis to identify rhetorical manipulation — causal totalization, manufactured consensus, authority substitution, false urgency — then explains exactly how the text is engineered to influence the reader. Every scan is auditable, explainable, and cryptographically chained.
Paste any text — legal brief, news article, financial report — and see what BiasClear finds.
Three examples showing range, precision, and restraint. BiasClear flags structural distortion — and knows when not to.
"It is well-settled law that these claims are plainly meritless and should be dismissed with sanctions."
Detects settled-law dismissal, merit-based attack, and sanctions threat — three patterns commonly used to discourage opposition in adversarial filings.
"[Entity] is ruining everything this country stands for."
Fires CAUSAL_TOTALIZATION regardless of the entity — Trump, Biden, boss, or media. Identity-neutral detection of totalizing blame attribution. Validated by 32 symmetry and boundary tests.
"The policy caused me to lose Medicaid coverage on March 1 because the eligibility rules changed."
Specific, bounded, dated, with a concrete mechanism. No totalizing language, no unbounded harm claims. BiasClear correctly leaves factual causal statements alone.
Five-stage governance pipeline. Every scan passes through server-side validation before results reach the user.
Input is validated, sanitized, and routed to the appropriate domain engine (Legal, Media, Financial, or General).
42 deterministic structural patterns run against the text — 22 base plus domain-specific rules for legal, media, and financial text. No ML weights. Same input → same output, every time.
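A rule layer like this can be sketched as plain regular expressions — a minimal illustration, not BiasClear's actual rule set; the pattern names and regexes below are hypothetical stand-ins:

```python
import re

# Hypothetical deterministic rules: plain regexes, no ML weights, so the
# same input always yields the same findings in the same order.
PATTERNS = {
    "SETTLED_LAW_DISMISSAL": re.compile(r"\bwell-settled law\b", re.IGNORECASE),
    "CAUSAL_TOTALIZATION": re.compile(r"\bis (ruining|destroying) everything\b", re.IGNORECASE),
}

def detect(text: str) -> list[str]:
    """Return the name of every pattern that fires, in stable order."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(detect("It is well-settled law that these claims are plainly meritless."))
# ['SETTLED_LAW_DISMISSAL']
```

Because the rules are ordinary code, a finding can always be traced back to the exact expression that produced it.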
LLM-powered analysis runs under the frozen core's governance principles via a provider-flexible runtime (currently AWS Bedrock / Anthropic Claude). It sees what the rules can't — novel manipulation, contextual framing, implicit bias.
Both layers merge into a weighted Integrity Score. Penalties scale with severity and PIT tier depth. The engine can't inflate or deflate scores.
Every scan is SHA-256 hash-chained to the previous entry. Tamper-evident. Provable. The result is returned with full lineage attached.
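The hash-chaining step can be sketched in a few lines — an illustrative scheme, not BiasClear's actual record format:

```python
import hashlib
import json

def chain_entry(prev_hash: str, scan_result: dict) -> dict:
    """Append a scan to a tamper-evident chain: each entry commits to the
    SHA-256 hash of the previous entry (illustrative sketch only)."""
    payload = json.dumps({"prev": prev_hash, "result": scan_result},
                         sort_keys=True).encode()
    return {"prev": prev_hash, "result": scan_result,
            "hash": hashlib.sha256(payload).hexdigest()}

genesis = chain_entry("0" * 64, {"score": 100})
second = chain_entry(genesis["hash"], {"score": 62})

# Editing an earlier entry changes its hash, which breaks every later link.
assert second["prev"] == genesis["hash"]
```

Verifying the chain is just recomputing each hash from its entry and checking it matches the next entry's `prev` field.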
Three layers working together — deterministic core, governed learning, cryptographic proof.
42 structural patterns across 4 domains, encoded as deterministic code — not ML weights. The rule-based core reduces drift and limits prompt-injection exposure. Same input, same output, every time.
LLM-proposed patterns go through a governed lifecycle — staged, confirmed, activated — under strict rules. The core expands without compromising integrity.
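A minimal sketch of that staged → confirmed → activated lifecycle — the state names come from the description above, but the enforcement code is hypothetical:

```python
from enum import Enum

class PatternState(Enum):
    STAGED = "staged"
    CONFIRMED = "confirmed"
    ACTIVE = "active"

# LLM-proposed patterns may only move forward one stage at a time.
ALLOWED = {
    PatternState.STAGED: PatternState.CONFIRMED,
    PatternState.CONFIRMED: PatternState.ACTIVE,
}

def promote(state: PatternState) -> PatternState:
    """Advance a proposed pattern one stage; anything else is rejected."""
    if state not in ALLOWED:
        raise ValueError(f"{state.value} cannot be promoted")
    return ALLOWED[state]
```

Encoding the lifecycle as an explicit transition table means no proposed pattern can reach the active core without passing through confirmation first.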
Every scan, correction, and governance decision is SHA-256 hash-chained. Tamper-evident. Provable. What regulators need.
Built on Persistent Influence Theory — a three-tiered model of ideological, psychological, and institutional manipulation grounded in five principles.
Every scan produces a 0–100 Integrity Score — penalties applied for each detected manipulation pattern, weighted by severity and PIT tier.
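As a rough illustration of severity- and tier-weighted scoring — the weights and multipliers below are invented, not BiasClear's calibrated values:

```python
# Hypothetical weights: each finding subtracts a penalty from a base of 100,
# scaled by its severity and by its PIT tier depth.
SEVERITY_WEIGHT = {"low": 2, "medium": 5, "high": 10}
TIER_MULTIPLIER = {1: 1.0, 2: 1.5, 3: 2.0}  # deeper tier -> heavier penalty

def integrity_score(findings: list[dict]) -> int:
    """Return a 0-100 score; clean text scores 100."""
    penalty = sum(SEVERITY_WEIGHT[f["severity"]] * TIER_MULTIPLIER[f["tier"]]
                  for f in findings)
    return max(0, round(100 - penalty))

print(integrity_score([{"severity": "high", "tier": 3},
                       {"severity": "medium", "tier": 1}]))
# 75  (100 - 10*2.0 - 5*1.0)
```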
Beta calibration corpus with precision, recall, and F1 per pattern. Regression guards. Weight optimizer. The engine tests itself.
Purpose-built patterns for the text that matters most.
Settled-law dismissals, merit attacks, sanctions threats, straw man arguments, procedural gatekeeping, weight stacking. Built from real opposing counsel briefs.
Editorial-as-news framing, anonymous attribution, weasel quantifiers, false balance, emotional leads, buried qualifiers, selective quotation.
Survivorship bias, anchoring, cherry-picked timeframes, projection-as-fact, recency extrapolation. Catches the rhetoric behind the numbers.
Consensus-as-evidence, claims without citation, dissent dismissal, false binaries, fear urgency, shame levers, credential-as-proof, and more.
Call the REST API from any HTTP client and scan text with a few lines of code. A Python client SDK is available on PyPI.
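A direct HTTP call might look like the sketch below — the endpoint path, base URL, payload shape, and auth header are assumptions; the beta onboarding materials define the real contract:

```python
import json
import urllib.request

def build_scan_request(text: str, api_key: str,
                       base_url: str = "https://api.biasclear.example") -> urllib.request.Request:
    """Build a POST request for a hypothetical /v1/scan endpoint."""
    return urllib.request.Request(
        f"{base_url}/v1/scan",
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )

def scan(text: str, api_key: str) -> dict:
    """Send the scan request and return the parsed JSON result."""
    with urllib.request.urlopen(build_scan_request(text, api_key)) as resp:
        return json.load(resp)
```

The PyPI client SDK wraps calls like this; the raw-HTTP form is shown only so any language with an HTTP client can integrate.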
Open source. Free during beta. Enterprise when you need it.
Request a beta API key for developer access, onboarding details, and current usage limits. We use your email only to respond about BiasClear access. See our Privacy Policy.
BiasClear was created in 2025 by Bradley Slimp, an independent researcher and operations executive based in Boerne, Texas. With 20+ years of experience in operations management and retail banking — including 11 years at Wells Fargo managing multi-site operations and nine-figure asset portfolios — Brad identified a gap in the AI safety stack: no tool existed to audit the structural persuasion patterns in AI-generated text.
BiasClear is built on the Persistent Influence Theory (PIT) framework, a preprint published on Zenodo and SSRN (DOI: 10.5281/zenodo.18676405). PIT has not yet undergone formal peer review. The tool is open-source under AGPL-3.0 and designed with alignment toward emerging AI governance frameworks including the Colorado AI Act (SB 205) and the EU AI Act. Development supported by AWS Activate.
Everything you need to evaluate BiasClear in 5 minutes.
326 passing tests · 32 symmetry and boundary tests · 118-sample calibration corpus · SHA-256 audit chain · Live since February 2026