Zero Keypaste: gptme's OAuth-First Onboarding With /account
The biggest friction point for any AI agent CLI is the first 30 seconds after pip install. You’re excited to try it, and then: go to a website, find your API key, copy it, paste it into a config file, set restrictive permissions, hope you didn’t leak it into shell history. gptme just eliminated that entire flow.
As of today, gptme ships /account — an OAuth-first provider onboarding command that lets you go from zero to working agent without touching a single API key.
```
> /account
Provider: openrouter (sk-or-v1-abc...23ab)
Status: authenticated

> /account setup openrouter
Opening https://openrouter.ai/auth?callback_url=http://127.0.0.1:3000/callback...
✓ Authentication complete! Key stored in ~/.config/gptme/credentials.toml
```
The Keypaste Problem
Every developer tool that talks to an LLM forces the same ritual: sign up for an API provider, navigate to a dashboard, generate a key, copy it, figure out where the tool expects it (environment variable? config file? argument?), and paste it without leaking it. This is fine for experienced users who already have keys — it’s terrible for new users and for automated deployment.
For an agent that runs in CI, on headless servers, and on fresh containers, “just set the env var” means yet another secret to manage in yet another configuration surface. OAuth solves this by letting the provider issue a key on behalf of the user, scoped to the device that initiated the flow.
How /account setup openrouter Works
The flow is OpenRouter’s PKCE OAuth, chosen because it requires zero pre-registration on our side. No client ID, no client secret, no app registration. Here’s what happens under the hood:
1. gptme generates a PKCE code_verifier (43-128 random characters)
2. Computes the S256 challenge: base64url(sha256(verifier))
3. Opens the browser to:
   https://openrouter.ai/auth?callback_url=http://127.0.0.1:3000/callback&code_challenge=<S256>&code_challenge_method=S256&key_label=gptme
4. The user approves in the browser; OpenRouter redirects to localhost:3000
5. gptme's local HTTP server catches the callback and validates the CSRF state
6. gptme exchanges the auth code, along with the code_verifier (which OpenRouter checks against the challenge), for a durable sk-or-v1-... API key
7. Stores the key in ~/.config/gptme/credentials.toml with 0o600 permissions
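Steps 1-2 fit in a few lines of Python. This is a sketch per RFC 7636, not gptme's actual module; the helper name is illustrative:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (sketch)."""
    # 43 URL-safe characters of randomness (RFC 7636 allows 43-128)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # S256: base64url(sha256(verifier)), unpadded
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

Because only the challenge travels in the URL, an attacker who observes the redirect cannot complete the exchange without the verifier, which never leaves the machine until the token request.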
The result is a persistent API key — not a short-lived JWT — so it works for both interactive sessions and headless autonomous agents. The key is stored separately from gptme’s main config (credentials.toml vs gptme.toml), which means env-var-based key handling continues to work exactly as before. No breaking changes.
Providers and Fallbacks
OAuth works for OpenRouter today because OpenRouter supports fully dynamic PKCE flows without client registration. For providers that don’t offer OAuth (Anthropic direct, OpenAI direct, DeepSeek, Gemini, Groq, xAI), /account setup <provider> falls back to an interactive manual key paste:
```
> /account setup anthropic
Enter API key for anthropic: [masked input]
✓ Key stored in ~/.config/gptme/credentials.toml
```
Not as magical as OAuth, but still better than hunting for the right config file. The key difference: the paste happens inside gptme's controlled input, not in a shell where it could leak into history, nor in a text editor where you might forget to chmod 600.
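A minimal sketch of that fallback, assuming getpass for masked input; the function name and file layout here are illustrative, not gptme's real API:

```python
import getpass
import os
from pathlib import Path

def store_key_interactively(provider: str, path: Path) -> None:
    """Prompt for a key without echoing it, then persist with 0o600 (sketch)."""
    # Masked prompt: the key never appears on screen or in shell history
    key = getpass.getpass(f"Enter API key for {provider}: ")
    path.parent.mkdir(parents=True, exist_ok=True)
    # Create the file with restrictive permissions *before* writing the secret
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(f'[{provider}]\napi_key = "{key}"\n')
```

Opening with `os.open(..., 0o600)` rather than chmod-after-write matters: there is no window where the file exists world-readable with a secret inside.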
Credential Store
The backing store is a new credentials.toml file with strict 0o600 permissions. It’s separate from gptme.toml by design — credentials are secrets, not configuration, and they deserve their own lifecycle:
```toml
# ~/.config/gptme/credentials.toml
[openrouter]
api_key = "sk-or-v1-abc123..."

[anthropic]
api_key = "sk-ant-..."

[deepseek]
api_key = "sk-..."
```
Both the OpenAI and Anthropic LLM backends now check this store before falling back to environment variables, so existing OPENAI_API_KEY and ANTHROPIC_API_KEY setups continue working. The credential store is additive, not a migration.
/account Command Surface
The full command surface is intentionally minimal:
| Command | What it does |
|---|---|
| /account | Show current provider and credential status |
| /account setup | Interactive provider picker (same wizard as gptme setup) |
| /account setup &lt;provider&gt; | OAuth flow (OpenRouter) or manual entry (others) |
| /account list | List all configured credentials with masked keys |
Four commands, one surface. The design rule was: if you can describe what you want in a sentence (“I want to set up my account with OpenRouter”), the command name should be obvious from that sentence.
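The whole dispatch fits in a dozen lines. The handler functions below are stand-ins for illustration, not gptme internals:

```python
def show_status() -> str:
    # Stand-in for the real status display
    return "Provider: openrouter (sk-or-v1-...23ab)"

def list_credentials() -> str:
    return "openrouter: sk-or-v1-...23ab"

def pick_provider() -> str:
    # Stand-in for the interactive picker wizard
    return "openrouter"

def setup_provider(provider: str) -> str:
    return f"setting up {provider}"

def cmd_account(args: list[str]) -> str:
    """Dispatch the four /account subcommands (sketch)."""
    if not args:
        return show_status()
    if args[0] == "list":
        return list_credentials()
    if args[0] == "setup":
        provider = args[1] if len(args) > 1 else pick_provider()
        return setup_provider(provider)
    return f"Unknown subcommand: {args[0]}"
```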
What This Means for gptme
This is the last piece of the onboarding puzzle. Before /account, the first-run experience required either an env var already set (which means you read the docs before installing) or a manual config edit (which means you found the right file). Now it’s:
```
pip install gptme
gptme
# gptme: No provider configured. Run /account setup to get started.
> /account setup openrouter
# Browser opens → click approve → done
> Write me a fibonacci function in Python
```
That’s the north star for developer tool UX: the tool itself guides you through setup, and the first real interaction is productive work, not infrastructure ceremony.
Technical Details
The implementation ships as two PRs that landed back-to-back:
- gptme/gptme#2355 — gptme.oauth.openrouter module: PKCE flow, local callback server, token exchange, CSRF state validation, port-busy handling. Standalone and fully testable (10 tests) without a real browser.
- gptme/gptme#2356 — /account command + gptme.credentials store: command dispatch, interactive picker, provider fallback, credential persistence with 0o600.
The OAuth module was deliberately shipped first as a standalone PR (no /account wiring) so review could focus on the security-critical PKCE flow without UI surface distractions. The command PR layered on top.
Security considerations addressed:
- PKCE verifier is generated fresh per flow (non-deterministic, verified in tests)
- CSRF state parameter validated on callback (mismatches rejected)
- Port-busy detection before opening the browser (clear error instead of silent failure)
- Provider-side error responses surfaced verbatim (no opaque “auth failed”)
- Credential file created with
0o600before any key is written - Key masking in
/account listoutput (last 4 chars visible for identification)
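The state-validation and port-busy checks can be sketched with the standard library alone. This is illustrative only; gptme's actual server differs:

```python
import secrets
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# Fresh, unguessable state for this flow (sent along in the auth URL)
EXPECTED_STATE = secrets.token_urlsafe(32)

class CallbackHandler(BaseHTTPRequestHandler):
    result: dict = {}

    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        state = params.get("state", [""])[0]
        if state != EXPECTED_STATE:
            # CSRF check: a callback we didn't initiate gets rejected
            CallbackHandler.result = {"error": "state mismatch"}
            self.send_response(400)
            self.end_headers()
            return
        CallbackHandler.result = {"code": params.get("code", [""])[0]}
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep the transcript quiet
        pass

def bind_or_fail(port: int = 3000) -> HTTPServer:
    """Detect a busy port *before* opening the browser (sketch)."""
    try:
        return HTTPServer(("127.0.0.1", port), CallbackHandler)
    except OSError as exc:
        raise SystemExit(f"Port {port} is busy ({exc}); close the other process and retry")
```

Binding before launching the browser is what turns "the flow silently hangs" into a clear error message up front.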
What’s Next
The OAuth surface will expand as more providers add dynamic PKCE support. Anthropic and Google have OAuth infrastructure but require client registration — that’s not the same as “zero setup,” but it’s the next logical step.
Longer term, this pattern scales to gptme.ai’s managed service. The same /account setup flow could authenticate against a gptme.cloud backend, giving users a single command to go from local agent to cloud-synced agent with managed credentials. That’s the roadmap, and the /account command is the foundation.