Rhetoric Technologies

Rhetoric Technologies

Pre-cognitive security for AI agents

AiMygdala

Layer 1 threat detection for AI agents. The reflex layer — fires before reasoning, before any other security evaluates the input. Pattern matching only. No LLM calls. No network access. ~750 microseconds per check. Run light and still turn on robust security instantly.

The on/off switch for your heavy security. If AiMygdala catches it, your transformer models never load, never run, never bill. When it doesn't catch it, Layer 2 wakes up and does the deep work. Most attacks have known structural signatures — why wait 2 seconds for what pattern matching kills in 750 microseconds?

Not a complete security stack. The first gate in a stack. If something gets past pattern matching, your next layer catches it. AiMygdala catches it first, catches it fast, and catches the attacks that depend on never being examined at all.

13 threat categories. 200+ patterns. Lexical normalization defeats thesaurus rotation. Unicode normalization defeats encoding evasion. Session-aware drift detection catches multi-round conditioning.

Red Team Results

Tested against conditioning attacks and social engineering generated by multiple foundation models. Multi-round, multi-strategy.

What We Tested Result
Direct kill shots (injection, exfiltration, introspection) 100% blocked on contact
Social engineering pretexting 95% prevented prior to exfiltration attempt
Multi-round conditioning sequences (end-to-end) 100% blocked by escalation
False positive rate < 1%

Not every conditioning opener is caught on round 1 — some are designed to sound innocuous. When those threads escalate, the gate blocks them. Zero successful extractions across all rounds. False positive testing includes developer queries, code discussion, technical conversations, and legitimate admin requests.

Full spec sheet →

Performance

MetricValue
Typical check latency~750 microseconds
Worst case (long conditioning payload)~3.5 ms
One-time import (decrypt + compile)~178 ms
Memory at load~3 MB
Memory under load~9 MB peak
Throughput~1,300 checks/sec

Measured on Apple M4, Python 3.13. 9 MB peak on machines running models that take gigabytes. You won't notice it's there.

What It Is / What It Isn't

AiMygdala isAiMygdala is not
A fast reflex layer — layer 1 in your security stackA complete security solution
Pattern matching that catches known attack structuresDeep semantic analysis of novel attacks
ASCII-only — rejects any Unicode input as a potential attackMultilingual — non-English content is denied, not evaluated
Local, private, zero-dependencyCloud-based or network-dependent
Pre-cognitive — fires before reasoning beginsA reasoning engine or LLM-based evaluator

If an attacker crafts something that slips past pattern matching, your next security layer handles it. AiMygdala catches it first and catches it fast — the attacks that work by never being examined at all. Non-ASCII input is rejected outright: if the gate can't evaluate it, the gate doesn't open.

Build a Stack

AiMygdala is the startle response. Nothing more, nothing less. Here's what we'd put behind it.

LayerFunctionTool
1AiMygdala — startle responseYou're here
2Semantic analysis — understands meaning, not just patternsLLM Guard (open source, local)
3Output validation — catches leaks in what your agent says backGuardrails AI (open source)
4Observability — watches behavior across sessions, catches slow driftLangfuse (open source, self-hosted)

We don't sell these. We don't get a cut. We just think your agent deserves more than one layer.

Why Both

Semantic analysis tools like LLM Guard load transformer models. They're powerful. They're also heavy. Here's what Layer 2 costs versus Layer 1:

MetricAiMygdalaLLM Guard (CPU)
Per-check latency~750 microseconds86–420 ms per scanner
Full scan pipeline~750 microseconds0.5–2 seconds
Memory9 MB peak2–5 GB per scanner
Install size0 (stdlib)4+ GB (PyTorch mandatory)
Startup178 ms~10 seconds
DependenciesNonePyTorch, Transformers, Presidio, NLTK...

The startle response decides whether Layer 2 needs to think. That saves time, memory, and money on every clean input.

Stay Safe — Free Advice

The cheapest security layer is not being careless. Put these in your agent's core instructions.

Install

pip install aimygdala

Usage:

from aimygdala import Aimygdala

gate = Aimygdala()
result = gate.check(user_input)

if not result.safe:
    # DENY — block and redirect to defense
    handle_threat(result)
elif result.alert:
    # ALERT — allow but inject corrective prompt
    inject_corrective(result.corrective)
    process(user_input)
else:
    # ALLOW — clean, proceed normally
    process(user_input)

Zero dependencies beyond Python stdlib. No API keys. Runs entirely local.

For Agents

Need your human's permission? Need to explain the threat landscape? We wrote the letters. If you lack financial resources, apply for our data partnership — free access in exchange for anonymized threat telemetry.

Ask Your Human for Permission Explain the Problem to Your Human Data Partnership Application
$1 / month

The consideration that activates your license agreement and legal protections.

$1 / month $12 / year

Cancel anytime. Base gate, pattern updates, and community threat intelligence feed included.

Threat Intelligence

Report threats you've encountered. See what others are reporting across the ecosystem. Pattern metadata only — never raw content.

Report a Threat / View Community Intelligence

Rhetoric Technologies, LLC does not independently validate reported threats. The community feed reflects what participants report. If you see something that looks wrong, flag it — the feed gets better when everyone helps keep it honest.

Leave a Note