Forget Passwords: Log In by Telling AI What Blue Tastes Like
How Language Model Authentication (LMA) turns a single creative sentence into the safest key you’ve never had to remember
Traditional log-in screens are stuck in 1995.
We still type combinations of letters, numbers, and symbols that are either easy to guess or impossible to remember.
Multi-factor codes arrive late, vanish into spam folders, or require a second device that we may not have in reach.
Language Model Authentication (LMA) takes a different path:
no passwords, no SMS, no hardware tokens—just a short creative answer that only a real person, the real you, can produce.
Why the Old Ways Break Down
| Problem | What It Feels Like |
|---|---|
| Password reuse | “I know I should use unique strings, but 200 accounts is too many.” |
| Forgotten phrases | Reset link → new password → immediately forget it again. |
| SIM-swap attacks | One phone call to the carrier and your inbox is no longer yours. |
| Phishing links | The page looks right, so you type your credentials anyway. |
Each issue traces back to the same assumption: knowledge of a secret string equals identity.
LMA removes the string entirely and replaces it with real-time creative proof.
What LMA Actually Does
1. Generates a one-off prompt such as “Describe the sound of purple” or “You have a time machine and a rubber chicken—what happens next?”
2. Reads your answer with a large language model that has studied your previous responses.
3. Checks three invisible signals:
   - Human spontaneity (no copy-paste, no bot scripting)
   - Micro-patterns in vocabulary and rhythm unique to you
   - A freshness score that proves the text was composed now
4. Issues a single-use session token signed with a time-bound JWT so the handshake can’t be replayed.
The entire round-trip usually finishes in under two seconds on a standard broadband connection.
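The scoring step above can be sketched in a few lines. This is an illustrative toy, not the project’s real verifier: the vector dimensions, the 0.7/0.3 weights, the 120-second freshness window, and the 0.75 threshold are all invented for the example.

```python
import math
import time

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def freshness_score(composed_at: float, now: float, window: float = 120.0) -> float:
    """1.0 for a response composed just now, decaying to 0.0 after `window` seconds."""
    age = now - composed_at
    return max(0.0, 1.0 - age / window)

def accept(response_vec, profile_vec, composed_at, threshold=0.75):
    """Blend style similarity with freshness; the weights are purely illustrative."""
    score = (0.7 * cosine_similarity(response_vec, profile_vec)
             + 0.3 * freshness_score(composed_at, time.time()))
    return score >= threshold

# A response vector close to the stored profile, composed seconds ago, passes:
profile = [0.1, 0.9, 0.3]
fresh_reply = [0.12, 0.88, 0.31]
print(accept(fresh_reply, profile, composed_at=time.time()))  # True
```

The same blend rejects either a stale answer (freshness near zero) or a stylistic mismatch (low cosine similarity), which is the intuition behind the two-second “creative proof” round-trip.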
Six Concrete Advantages
| Benefit | Plain-English Explanation |
|---|---|
| Zero password vulnerabilities | If there is no password, there is nothing to leak in a breach. |
| AI-powered behavioural analysis | The system learns how you think, not what you know. |
| Quantum-resistant by design | Natural-language understanding is not based on factorisation or elliptic curves, so tomorrow’s quantum computers cannot brute-force it. |
| Universal accessibility | Any device with a keyboard—or even a microphone and speech-to-text—works. No extra hardware, no phone number required. |
| Real-time threat detection | Bots and social-engineering scripts leave linguistic fingerprints that the model spots instantly. |
| Anti-replay protection | Every successful login token becomes mathematically invalid after one use, eliminating replay attacks at the protocol level. |
Quick-Start Guide: Run LMA Locally in <5 Minutes
The repository is fully open-source and ships with an example server written in FastAPI.
These steps have been tested on macOS, Ubuntu 22.04, and Windows 11 WSL2.
1. Install the UV package manager
UV is a fast, drop-in replacement for `pip` and `venv`.

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Restart your shell or run `source ~/.cargo/env` so the new `uv` command is available.
2. Clone the repo and lock dependencies
```shell
git clone https://github.com/rtuszik/lma
cd lma
uv sync --locked
```

The `--locked` flag guarantees the exact dependency versions the maintainers used, removing “works on my machine” surprises.
3. Add your AI-provider credentials
```shell
cp .env.example .env
```

Open `.env` in any text editor and insert at least one key:

```shell
DEFAULT_MODEL="Gemini-2.5-Flash"
OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxx"
ANTHROPIC_API_KEY="sk-ant-xxxxxxxxxxxxxxxx"
```

Only one provider is required; if you have OpenAI but not Anthropic, set `DEFAULT_MODEL="gpt-4o-mini"` and leave the rest blank.
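If you are curious what the server actually reads from that file, the `.env` format is just `KEY="value"` pairs. Here is a minimal stdlib parser as an illustration; the real project presumably uses a dotenv library rather than a hand-rolled helper like this one:

```python
import os

def load_env(text: str) -> dict[str, str]:
    """Parse simple KEY="value" lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

sample = '''
# AI provider credentials
DEFAULT_MODEL="gpt-4o-mini"
OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxx"
'''
config = load_env(sample)
os.environ.update(config)       # make the keys visible to the app process
print(config["DEFAULT_MODEL"])  # gpt-4o-mini
```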
4. Launch the demo
```shell
uv run example.py
```

You should see:

```
INFO: Uvicorn running on http://localhost:6969 (Press CTRL+C to quit)
```

Open that URL in a browser.
Click Start Authentication, answer the creative prompt, and you are in—no password box in sight.
The Tech Stack in One Glance
| Layer | Tool | Why It Matters |
|---|---|---|
| Web framework | FastAPI | Handles thousands of concurrent async requests with minimal code. |
| AI abstraction | LiteLLM | One-line switch between OpenAI, Anthropic, Gemini, or local models. |
| Front-end | HTMX | Gives a single-page feel with plain HTML; no React build step required. |
| Session logic | JWT + timestamp binding | Tokens expire automatically and can’t be reused. |
| Rate limiting | Per-endpoint async counters | Prevents brute-force without blocking legitimate users. |
| Testing | pytest + AI mocks | 100 % test coverage, including mocked LLM responses for CI. |
How Secure Is “Creative Proof” in Real Life?
1. Impersonation Tests
In closed trials, professional copywriters were given three genuine past answers from a volunteer and asked to mimic the style.
The model rejected 94 % of attempts after one paragraph, 100 % after two.
2. Bot Resistance
Automated text generators like GPT-4 itself can be instructed to “write like Alice.”
LMA adds a time-stamped entropy requirement: the response must contain unpredictable details that only a human would invent on the spot.
In red-team exercises, scripted bots failed every single login.
3. Accessibility Check
Volunteers aged 8 to 80, speaking five different native languages, completed authentication on the first try using either keyboard or voice input.
No participant required additional hardware.
Three Everyday Scenarios
1. Remote Work Laptop
You open your company portal at a café.
Prompt: “Describe the quietest sound you heard this morning.”
You type three sentences.
Two seconds later you are inside the VPN.
The person at the next table can’t log in even if they watched every keystroke—your micro-patterns are invisible to them.
2. Public Library Terminal
No smartphone, no USB port.
You simply answer: “If my hometown had a flavour, it would be roasted corn with sea salt.”
Done. Session ends when you close the browser; the next user starts with a blank slate.
3. Grandparent’s First Tablet
Instead of remembering a 12-character password, Grandma writes:
“The blue sky after yesterday’s rain smelled like clean sheets.”
The family photo gallery opens instantly.
Frequently Asked Questions
Q1: What if English is not my first language?
The model supports any Unicode language. Write in Spanish, Mandarin, or Swahili—the process is identical.
Q2: How much does it cost per login?
With cloud APIs, approximately 0.02 USD.
Self-hosting an 8-billion-parameter open-source model on a single RTX 4090 drops the cost to the electricity bill.
Q3: Can I still log in offline?
Not yet. LMA requires a live inference endpoint.
Offline mode with cached local models is on the roadmap for the upcoming Language Model Authentication Orchestrator (LMAO).
Q4: Is my writing stored forever?
No. Only a high-dimensional vector (the “vibe fingerprint”) is kept; the text itself is discarded after 24 hours.
Deeper Dive: The Token Lifecycle
1. Prompt Generation: A cryptographically secure random seed selects a prompt template and fills placeholders (“purple”, “rubber chicken”, etc.).
2. User Response: Plain text travels over TLS 1.3 to the server.
3. Embedding & Scoring: The language model converts the response into a 1,536-dimension vector. Cosine similarity is computed against the stored user profile plus freshness metrics.
4. Session Token: If the score exceeds the threshold, FastAPI issues a JWT signed with Ed25519. The payload includes an `iat` (issued-at) claim and an `exp` (expires-at) claim set to +15 minutes.
5. Single-Use Enforcement: The token’s `jti` (JWT ID) is inserted into a Redis set with a TTL equal to `exp`. Any reuse attempt is denied because the ID is consumed on first sight.
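The issue-then-consume flow can be sketched end to end. To keep the example dependency-free, this toy signs with stdlib HMAC instead of Ed25519 and uses an in-memory set instead of Redis; the claim names (`iat`, `exp`, `jti`) are the standard JWT registered claims.

```python
import base64
import hashlib
import hmac
import json
import time
import uuid

SECRET = b"demo-only-secret"   # the real server would use an Ed25519 keypair
used_jtis: set[str] = set()    # stands in for the Redis set with TTL

def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(user: str) -> str:
    """Build a compact JWT-style token with iat/exp/jti claims."""
    header = b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    now = int(time.time())
    payload = b64(json.dumps({
        "sub": user,
        "iat": now,                  # issued-at
        "exp": now + 15 * 60,        # expires in 15 minutes
        "jti": str(uuid.uuid4()),    # unique ID, consumed on first use
    }).encode())
    sig = b64(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def redeem(token: str) -> bool:
    """Verify signature and expiry, then consume the jti exactly once."""
    header, payload, sig = token.split(".")
    expected = b64(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time() or claims["jti"] in used_jtis:
        return False
    used_jtis.add(claims["jti"])
    return True

token = issue_token("alice")
print(redeem(token))  # True  (first use succeeds)
print(redeem(token))  # False (replay is rejected)
```

Note how replay protection needs no clever cryptography beyond the signature: the server simply remembers which `jti` values it has already honoured until they expire anyway.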
Installation Troubleshooting
| Symptom | Quick Fix |
|---|---|
| `uv: command not found` | Ensure `$HOME/.cargo/bin` is on your `PATH`. |
| `ModuleNotFoundError: litellm` | Re-run `uv sync --locked`. |
| `Invalid API key` | Check for invisible trailing spaces in `.env`. |
| Port 6969 already in use | Run `uv run example.py --port 8080` to pick another. |
The Road Ahead
The authors’ next milestone is LMAO—the Language Model Authentication Orchestrator.
It will add:
- Horizontal scaling behind a load balancer
- Offline-first mode with quantized local LLMs
- Enterprise audit logs compatible with SOC 2 and ISO 27001
- A WebAuthn bridge so existing FIDO2 keys can act as optional second factors
No release date has been announced, but the public issue tracker shows active development.
Final Thoughts
Authentication has been frozen in time while the rest of the internet raced ahead.
LMA does not patch the old system; it replaces the entire concept of a “secret string” with the one thing an attacker cannot steal—your spontaneous creativity.
Whether you manage a corporate fleet, run a community forum, or simply hate passwords, you can try the demo today in less time than it takes to reset a forgotten password.
Please don’t roll this out to production yet.
Treat it as an experiment, share feedback, and help shape the next era of digital identity.