RAG retrieves documents to augment LLM responses. pref0 retrieves learned preferences. They solve different problems and work well together.
| | pref0 | RAG |
|---|---|---|
| What it retrieves | Structured user preferences | Documents and text chunks |
| Data source | Extracted from conversations | Pre-indexed document corpus |
| Personalization | Per-user preference profiles | Same documents for all users |
| Learns over time | Yes — confidence compounds | No — documents are static |
| Infrastructure | Hosted API, no vector DB | Vector database + embedding pipeline |
| Best for | How the user wants things done | What the agent needs to know |
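The table's retrieval contrast can be sketched in a few lines. Everything below is a hypothetical stand-in, not the real pref0 API or a real vector store: keyword overlap plays the role of similarity search, and an in-memory dict plays the role of the pref0 service. The point is the difference in shape: RAG returns the same chunks for every caller, while a preference lookup is keyed by user.

```python
# Hypothetical stand-ins -- not the real pref0 API or a real vector store.

CORPUS = [
    "Refunds are processed within 5 business days.",
    "Enterprise plans include SSO and audit logs.",
]

# Stand-in per-user preference store (pref0 would return this via its API).
PREFERENCES = {
    "user_a": {"tone": "formal", "format": "bullet points"},
    "user_b": {"tone": "casual", "format": "short paragraphs"},
}

def rag_retrieve(query: str, k: int = 1) -> list[str]:
    """Toy retrieval: rank chunks by word overlap with the query.
    Results depend only on the query, not on who is asking."""
    words = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda c: -len(words & set(c.lower().split())))
    return scored[:k]

def get_preferences(user_id: str) -> dict:
    """Toy profile lookup: one direct fetch, a different result per user."""
    return PREFERENCES.get(user_id, {})

# Same query returns the same documents for everyone...
print(rag_retrieve("how long do refunds take"))
# ...but each user gets their own preference profile.
print(get_preferences("user_a"))
print(get_preferences("user_b"))
```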
RAG provides knowledge — facts, documentation, data the LLM doesn't have. pref0 provides preferences — how the user wants the LLM to behave. These are orthogonal concerns.
RAG typically retrieves the same documents regardless of who's asking. pref0 retrieves a different preference profile for each user. Both are injected into the prompt, but they serve different purposes.
RAG documents are indexed once and updated manually. pref0 preferences learn and compound automatically from every conversation. The agent gets more personalized over time without manual intervention.
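pref0's actual scoring model isn't documented here, so the snippet below is only a toy illustration of the compounding idea: each time a conversation re-confirms a preference, its confidence moves a fixed fraction of the remaining gap toward 1, so repeated observations strengthen it with no manual re-indexing. The update rule and numbers are assumptions for illustration.

```python
def reinforce(confidence: float, rate: float = 0.3) -> float:
    """Toy update: close a fixed fraction of the gap to full confidence.
    Illustrative only -- not pref0's actual scoring."""
    return confidence + rate * (1.0 - confidence)

# A preference observed once starts with low confidence...
c = 0.3
history = [c]
# ...and compounds as later conversations re-confirm it.
for _ in range(4):
    c = reinforce(c)
    history.append(round(c, 3))

print(history)  # strictly increasing, asymptotically approaching 1.0
```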
They solve different problems. RAG gives your agent knowledge. pref0 teaches your agent preferences. Most production agents benefit from both — RAG for what to say, pref0 for how to say it.
Yes, the two combine cleanly: inject pref0 preferences into the system prompt alongside your RAG context. The agent gets both knowledge and personalization.
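A minimal sketch of that injection, assuming a preference profile shaped like the dict below (the keys and example values are placeholders, not pref0's real response schema): RAG chunks land in the system prompt as knowledge, preferences as behavioral instructions.

```python
def build_system_prompt(rag_chunks: list[str], preferences: dict[str, str]) -> str:
    """Combine retrieved knowledge and learned preferences into one system prompt."""
    knowledge = "\n".join(f"- {c}" for c in rag_chunks)
    prefs = "\n".join(f"- {k}: {v}" for k, v in preferences.items())
    return (
        "Use the following context to answer:\n"
        f"{knowledge}\n\n"
        "Follow these user preferences when responding:\n"
        f"{prefs}"
    )

prompt = build_system_prompt(
    rag_chunks=["Refunds are processed within 5 business days."],  # from your RAG pipeline
    preferences={"tone": "formal", "format": "bullet points"},     # from pref0
)
print(prompt)
```

The same prompt string then goes to your LLM call as the system message, with the user's question as the user message.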
No vector database or embeddings are needed: pref0 returns a user's full preference profile directly, with no similarity search. It's a simpler architecture.
Your users are already teaching your agent what they want. pref0 makes sure the lesson sticks.