# pref0 + LlamaIndex

Combine pref0's preference learning with LlamaIndex's retrieval: LlamaIndex handles knowledge retrieval while pref0 handles user preferences.

## Quick start

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
import requests

PREF0_API = "https://api.pref0.com"
PREF0_KEY = "pref0_sk_..."

def get_preferences(user_id: str) -> str:
    """Fetch learned preferences from pref0 and format them for a prompt."""
    res = requests.get(
        f"{PREF0_API}/v1/profiles/{user_id}",
        headers={"Authorization": f"Bearer {PREF0_KEY}"},
        timeout=10,
    )
    res.raise_for_status()
    prefs = res.json().get("preferences", [])
    # Keep only preferences the system is reasonably confident about
    return "\n".join(
        f"- {p['key']}: {p['value']}"
        for p in prefs if p["confidence"] >= 0.5
    )

# LlamaIndex handles knowledge retrieval
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# pref0 handles user preferences
learned = get_preferences("user_abc123")

chat_engine = index.as_chat_engine(
    system_prompt=f"You are a helpful assistant.\n\nLearned user preferences:\n{learned}",
)

response = chat_engine.chat("How do I deploy this project?")
print(response)
```

## Why use pref0 with LlamaIndex

### Complementary to RAG

LlamaIndex retrieves knowledge. pref0 retrieves preferences. Together they build a complete context.
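
If you want both retrieval paths to be explicit, here is a minimal sketch that pulls nodes from the index and preferences from pref0, then assembles a single prompt. It reuses `index` and `get_preferences` from the quick start; the query string is just an example.

```python
# Minimal sketch: merge knowledge (LlamaIndex) and preferences (pref0)
# into one prompt. Reuses `index` and `get_preferences` from above.
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("How do I deploy this project?")

knowledge = "\n\n".join(n.get_content() for n in nodes)  # retrieved facts
preferences = get_preferences("user_abc123")             # learned preferences

prompt = (
    "Answer using the context below.\n\n"
    f"Context:\n{knowledge}\n\n"
    f"User preferences:\n{preferences}\n\n"
    "Question: How do I deploy this project?"
)
```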

### Works with any index

Vector stores, knowledge graphs, SQL — pref0 works alongside any LlamaIndex index type.
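
For instance, the same helper drops into a `SummaryIndex` (or any other index type) unchanged. A sketch reusing `documents` and `get_preferences` from the quick start:

```python
from llama_index.core import SummaryIndex

# Same preference injection, different index type.
summary_index = SummaryIndex.from_documents(documents)
chat_engine = summary_index.as_chat_engine(
    system_prompt=(
        "You are a helpful assistant.\n\n"
        f"Learned user preferences:\n{get_preferences('user_abc123')}"
    ),
)
```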

### Chat engine compatible

Inject preferences into system prompts for chat engines, query engines, or custom pipelines.
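
Query engines take prompt templates rather than a system prompt, so one way to inject preferences there is through `text_qa_template`. A sketch assuming the `learned` string from the quick start:

```python
from llama_index.core import PromptTemplate

# Inject preferences into a query engine's QA prompt. {context_str} and
# {query_str} are filled in by LlamaIndex at query time.
qa_template = PromptTemplate(
    "Context:\n{context_str}\n\n"
    "User preferences:\n" + learned + "\n\n"
    "Query: {query_str}\n"
    "Answer:"
)
query_engine = index.as_query_engine(text_qa_template=qa_template)
response = query_engine.query("How do I deploy this project?")
```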

### Separate concerns

Keep knowledge retrieval and preference learning as independent systems. Simpler to debug and maintain.
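
Independence also means retrieval should not break when the preference service is unavailable. One way to make that explicit, sketched as a wrapper around the quick start's `get_preferences`:

```python
def safe_preferences(user_id: str) -> str:
    # If pref0 is unreachable, fall back to plain RAG instead of failing.
    try:
        return get_preferences(user_id)
    except requests.RequestException:
        return ""
```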

## Add preference learning to LlamaIndex

Your users are already teaching your agent what they want. pref0 makes sure the lesson sticks.