Preference learning for AI agents

Your agent should learn preferences

Users correct their agents every day; then the session ends and the correction is forgotten. pref0 makes those lessons stick.

First 100 requests/mo free

Watch it learn

Session #7 (Monday) → Session #12 (Thursday) → Session #15 (next Monday)

Same preference, different sessions. Confidence compounds until the agent just knows.

Every correction is a learning signal

Real signals pref0 extracts and compounds across conversations.

"Use TypeScript, not JavaScript"

language: typescript (confidence 0.70)

"Deploy to Vercel, not Netlify"

deploy_target: vercel (confidence 0.70)

"Use pnpm instead of npm"

package_manager: pnpm (confidence 0.70)

"Bullet points, not paragraphs"

response_format: bullet_points (confidence 0.70)

"Keep it under 5 lines"

response_length: concise (confidence 0.40)

"Use Postgres, not MySQL"

database: postgres (confidence 0.70)

Each preference starts with a confidence score. Repeat it across different conversations and it becomes a strong learned preference.
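How the compounding works under the hood isn't spelled out on this page, but the effect is easy to sketch. Here is a minimal TypeScript illustration of one plausible rule, where each repeat in a new session closes part of the remaining gap to 1.0; the Preference type and reinforce function are illustrative, not pref0's internals.

// Illustrative only: pref0's actual scoring rule is internal and may differ.
type Preference = {
  key: string;        // e.g. "language"
  value: string;      // e.g. "typescript"
  confidence: number; // 0 to 1
  sessions: string[]; // sessions in which the preference was observed
};

// Assumed rule: each new session closes 40% of the remaining gap to 1.0,
// so repeats compound the score without ever exceeding 1.0.
function reinforce(pref: Preference, sessionId: string): Preference {
  if (pref.sessions.includes(sessionId)) return pref; // repeats within one session don't compound
  return {
    ...pref,
    confidence: pref.confidence + (1 - pref.confidence) * 0.4,
    sessions: [...pref.sessions, sessionId],
  };
}

// 0.70 → 0.82 → 0.89 across three separate sessions
let lang: Preference = { key: "language", value: "typescript", confidence: 0.7, sessions: ["session_7"] };
lang = reinforce(lang, "session_12");
lang = reinforce(lang, "session_15");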

How it works

Three steps, two API calls. The learning happens automatically.

1. Send conversations

Pass chat history after each session. pref0 extracts corrections and preferences automatically.

2. Preferences compound

Same preference across sessions? Confidence goes up. The profile gets sharper over time.

3. Inject at inference

Fetch learned preferences before your agent responds. It behaves like it already knows the user.

1. Track a conversation

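// Call once a session ends; pref0 extracts corrections and preferences from the messages.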
await fetch("https://api.pref0.com/v1/track", {
  method: "POST",
  headers: {
    Authorization: "Bearer pref0_sk_...",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    userId: "user_abc",
    messages: conversation.messages,
  }),
});

2. Fetch preferences at inference

const res = await fetch(
  "https://api.pref0.com/v1/profiles/user_abc",
  { headers: { Authorization: "Bearer pref0_sk_..." } }
);
const { prompt } = await res.json();
// → "Prefers TypeScript, pnpm, Tailwind..."
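Injecting them is just prompt assembly. A minimal sketch of step 3, assuming the OpenAI SDK as the model client and reusing conversation.messages from the tracking example; swap in whatever client your agent uses.

import OpenAI from "openai";

const openai = new OpenAI();

// Prepend the learned preferences as standing instructions,
// then pass the session's messages through unchanged.
const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: `User preferences:\n${prompt}` },
    ...conversation.messages,
  ],
});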
100 free requests every month

A few sessions to reach high confidence

2 endpoints to integrate with any agent

Not memory. Not RAG. Preference learning.

Memory stores logs. RAG retrieves documents. pref0 extracts structured preferences from corrections, compounds confidence over time, and serves them at inference.

                     Memory             RAG                    pref0
Stores               Raw conversations  Documents              Structured preferences
Learns over time     No                 No                     Yes, confidence compounds
Handles corrections  No                 No                     Core signal
Integration          Varies             Vector DB + retriever  2 endpoints
Scoping              Per user           Per collection         User → Team → Org

Stop re-correcting. Start learning.

Your users are already teaching your agent what they want. pref0 makes sure the lesson sticks.

Get Started Free

100 requests/mo free · $0.005 per request after