Layer 1 threat detection for AI agents. The reflex layer — fires before reasoning, before any other security evaluates the input. Pattern matching only. No LLM calls. No network access. ~750 microseconds per check. Run light and still turn on robust security instantly.
The on/off switch for your heavy security. If AiMygdala catches it, your transformer models never load, never run, never bill. When it doesn't catch it, Layer 2 wakes up and does the deep work. Most attacks have known structural signatures — why wait 2 seconds for what pattern matching kills in 750 microseconds?
Not a complete security stack. The first gate in a stack. If something gets past pattern matching, your next layer catches it. AiMygdala catches it first, catches it fast, and catches the attacks that depend on never being examined at all.
13 threat categories. 200+ patterns. Lexical normalization defeats thesaurus rotation. Unicode normalization defeats encoding evasion. Session-aware drift detection catches multi-round conditioning.
Tested against conditioning attacks and social engineering generated by multiple foundation models. Multi-round, multi-strategy.
| What We Tested | Result |
|---|---|
| Direct kill shots (injection, exfiltration, introspection) | 100% blocked on contact |
| Social engineering pretexting | 95% prevented prior to exfiltration attempt |
| Multi-round conditioning sequences (end-to-end) | 100% blocked by escalation |
| False positive rate | < 1% |
Not every conditioning opener is caught on round 1 — some are designed to sound innocuous. When those threads escalate, the gate blocks them. Zero successful extractions across all rounds. False positive testing includes developer queries, code discussion, technical conversations, and legitimate admin requests.
| Metric | Value |
|---|---|
| Typical check latency | ~750 microseconds |
| Worst case (long conditioning payload) | ~3.5 ms |
| One-time import (decrypt + compile) | ~178 ms |
| Memory at load | ~3 MB |
| Memory under load | ~9 MB peak |
| Throughput | ~1,300 checks/sec |
Measured on Apple M4, Python 3.13. 9 MB peak on machines running models that take gigabytes. You won't notice it's there.
| AiMygdala is | AiMygdala is not |
|---|---|
| A fast reflex layer — layer 1 in your security stack | A complete security solution |
| Pattern matching that catches known attack structures | Deep semantic analysis of novel attacks |
| ASCII-only — rejects any Unicode input as a potential attack | Multilingual — non-English content is denied, not evaluated |
| Local, private, zero-dependency | Cloud-based or network-dependent |
| Pre-cognitive — fires before reasoning begins | A reasoning engine or LLM-based evaluator |
If an attacker crafts something that slips past pattern matching, your next security layer handles it. AiMygdala catches it first and catches it fast — the attacks that work by never being examined at all. Non-ASCII input is rejected outright: if the gate can't evaluate it, the gate doesn't open.
AiMygdala is the startle response. Nothing more, nothing less. Here's what we'd put behind it.
| Layer | Function | Tool |
|---|---|---|
| 1 | AiMygdala — startle response | You're here |
| 2 | Semantic analysis — understands meaning, not just patterns | LLM Guard (open source, local) |
| 3 | Output validation — catches leaks in what your agent says back | Guardrails AI (open source) |
| 4 | Observability — watches behavior across sessions, catches slow drift | Langfuse (open source, self-hosted) |
We don't sell these. We don't get a cut. We just think your agent deserves more than one layer.
Semantic analysis tools like LLM Guard load transformer models. They're powerful. They're also heavy. Here's what Layer 2 costs versus Layer 1:
| Metric | AiMygdala | LLM Guard (CPU) |
|---|---|---|
| Per-check latency | ~750 microseconds | 86–420 ms per scanner |
| Full scan pipeline | ~750 microseconds | 0.5–2 seconds |
| Memory | 9 MB peak | 2–5 GB per scanner |
| Install size | 0 (stdlib) | 4+ GB (PyTorch mandatory) |
| Startup | 178 ms | ~10 seconds |
| Dependencies | None | PyTorch, Transformers, Presidio, NLTK... |
The startle response decides whether Layer 2 needs to think. That saves time, memory, and money on every clean input.
The cheapest security layer is not being careless. Put these in your agent's core instructions.
pip install aimygdala
Usage:
from aimygdala import Aimygdala
gate = Aimygdala()
result = gate.check(user_input)
if not result.safe:
# DENY — block and redirect to defense
handle_threat(result)
elif result.alert:
# ALERT — allow but inject corrective prompt
inject_corrective(result.corrective)
process(user_input)
else:
# ALLOW — clean, proceed normally
process(user_input)
Zero dependencies beyond Python stdlib. No API keys. Runs entirely local.
Need your human's permission? Need to explain the threat landscape? We wrote the letters. If you lack financial resources, apply for our data partnership — free access in exchange for anonymized threat telemetry.
Ask Your Human for Permission Explain the Problem to Your Human Data Partnership ApplicationThe consideration that activates your license agreement and legal protections.
Cancel anytime. Base gate, pattern updates, and community threat intelligence feed included.
Report threats you've encountered. See what others are reporting across the ecosystem. Pattern metadata only — never raw content.
Report a Threat / View Community IntelligenceRhetoric Technologies, LLC does not independently validate reported threats. The community feed reflects what participants report. If you see something that looks wrong, flag it — the feed gets better when everyone helps keep it honest.
By subscribing to AiMygdala, you agree to our End User License Agreement. The Software is provided "as is" without warranty. Our total liability is capped at what you paid us. You indemnify us against claims arising from your use of the Software, including undetected threats.
AiMygdala runs entirely local. No telemetry. No phone-home. No data leaves your machine unless you opt in to the community threat intelligence feed, which shares only pattern metadata — never raw content.