Research agents gather and synthesize information. pref0 learns how each user wants results structured — depth, format, sources, and focus areas.
Real corrections that pref0 extracts and compounds across conversations:
"Give me bullet points, not paragraphs"
format: bullet_points0.70"Always cite sources with links"
citations: always_with_links0.70"Focus on recent papers, last 2 years"
recency: last_2_years0.70"Include competing viewpoints"
perspective: balanced0.40"Start with the TL;DR"
structure: tldr_first0.70"Skip the methodology, just give me the findings"
depth: findings_only0.40Research the latest approaches to LLM fine-tuning
Large language model fine-tuning has evolved significantly in recent years. The field encompasses several key methodologies including full fine-tuning, LoRA, QLoRA, and RLHF...
Too long. Give me bullet points with paper links. Start with the key takeaway.
pref0 extracts the correction from this conversation and adds it to the user's preference profile. Next time, the agent gets it right.
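A minimal sketch of what "extract and compound" could look like under the hood. Everything here is illustrative: the class name, the confidence scheme (new preferences start at 0.40, repeat observations rise toward 1.0), and the method names are assumptions for the example, not pref0's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class PreferenceProfile:
    # key -> (value, confidence); e.g. "format" -> ("bullet_points", 0.70)
    prefs: dict = field(default_factory=dict)

    def apply_correction(self, key: str, value: str, base: float = 0.40) -> None:
        old = self.prefs.get(key)
        if old and old[0] == value:
            # The same correction seen again compounds: confidence rises.
            conf = round(min(1.0, old[1] + 0.30), 2)
        else:
            # A first (or conflicting) observation starts at low confidence.
            conf = base
        self.prefs[key] = (value, conf)

profile = PreferenceProfile()
# "Too long. Give me bullet points... Start with the key takeaway." yields:
profile.apply_correction("format", "bullet_points")
profile.apply_correction("structure", "tldr_first")
# The same format preference surfaces in a later conversation:
profile.apply_correction("format", "bullet_points")
```

Repeated corrections are what move a preference from tentative (0.40) to trusted (0.70), matching the confidence scores shown above.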
Bullet points vs. prose, TL;DR first vs. detailed analysis — the agent delivers in the right format.
Academic papers, blog posts, documentation — pref0 learns what sources the user trusts.
Some users want deep dives, others want summaries. The agent adapts automatically.
Research style preferences apply across topics: once learned, they shape how every research task is delivered.
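One way a stored profile could feed back into the agent on the next request: render only high-confidence preferences into the agent's instructions. This is a hypothetical sketch with invented names and a made-up 0.5 confidence cutoff, not pref0's actual integration.

```python
# Example stored profile: key -> (value, confidence).
prefs = {
    "format": ("bullet_points", 0.70),
    "structure": ("tldr_first", 0.70),
    "perspective": ("balanced", 0.40),  # below the cutoff, so not applied yet
}

def render_instructions(prefs: dict, min_conf: float = 0.5) -> str:
    # Only surface preferences the system is reasonably confident about.
    lines = [
        f"- {key}: {value}"
        for key, (value, conf) in sorted(prefs.items())
        if conf >= min_conf
    ]
    return "User preferences:\n" + "\n".join(lines)

print(render_instructions(prefs))
```

Filtering by confidence keeps a single offhand remark from permanently steering the agent, while compounded corrections take effect automatically.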
Your users are already teaching your agent what they want. pref0 makes sure the lesson sticks.