Research agents should learn how you want information presented

Research agents gather and synthesize information. pref0 learns how each user wants results structured — depth, format, sources, and focus areas.

What pref0 learns

Real corrections that pref0 extracts and compounds across conversations:

"Give me bullet points, not paragraphs"

format: bullet_points (confidence 0.70)

"Always cite sources with links"

citations: always_with_links (confidence 0.70)

"Focus on recent papers, last 2 years"

recency: last_2_years (confidence 0.70)

"Include competing viewpoints"

perspective: balanced (confidence 0.40)

"Start with the TL;DR"

structure: tldr_first (confidence 0.70)

"Skip the methodology, just give me the findings"

depth: findings_only (confidence 0.40)
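
As a rough mental model, you can picture the learned profile as a small set of key/value pairs, each with a confidence score that grows as the same correction recurs. The sketch below is illustrative only; the class and field names are assumptions, not pref0's actual schema.

```python
# Hypothetical sketch of a learned preference profile.
# Field names and confidence mechanics are illustrative assumptions,
# not pref0's actual data model.
from dataclasses import dataclass

@dataclass
class Preference:
    key: str           # e.g. "format", "citations", "recency"
    value: str          # e.g. "bullet_points", "always_with_links"
    confidence: float    # rises as the same correction recurs across conversations

profile = [
    Preference("format", "bullet_points", 0.70),
    Preference("citations", "always_with_links", 0.70),
    Preference("recency", "last_2_years", 0.70),
    Preference("perspective", "balanced", 0.40),
    Preference("structure", "tldr_first", 0.70),
    Preference("depth", "findings_only", 0.40),
]
```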

Example conversation

user

Research the latest approaches to LLM fine-tuning

assistant

Large language model fine-tuning has evolved significantly in recent years. The field encompasses several key methodologies including full fine-tuning, LoRA, QLoRA, and RLHF...

user

Too long. Give me bullet points with paper links. Start with the key takeaway.

pref0 extracts the corrections from this exchange (bullet-point format, paper links, takeaway first) and adds them to the user's preference profile. Next time, the agent gets it right.
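
One way to picture "getting it right next time": before the next research task, the stored preferences are turned into instructions that ride along with the request. The sketch below reuses the Preference records from the earlier sketch; the threshold, prompt wording, and surrounding agent call are assumptions for demonstration, not pref0's API.

```python
# Illustrative sketch: apply a stored profile to the next research request.
# Assumes the Preference records from the sketch above; the 0.5 threshold
# and prompt wording are arbitrary choices for demonstration.

def render_preferences(profile, threshold=0.5):
    """Turn sufficiently confident preferences into system-prompt instructions."""
    lines = [f"- {p.key}: {p.value}" for p in profile if p.confidence >= threshold]
    return "Follow the user's known research preferences:\n" + "\n".join(lines)

system_prompt = render_preferences(profile)
# The agent call itself is whatever framework you already use; the point is
# that the learned preferences are injected into every new research task.
```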

Benefits

Format preferences

Bullet points vs. prose, TL;DR first vs. detailed analysis — the agent delivers in the right format.

Source preferences

Academic papers, blog posts, documentation — pref0 learns what sources the user trusts.

Depth control

Some users want deep dives, others want summaries. The agent adapts automatically.

Cross-topic consistency

Research style preferences apply across topics. Once learned, every research result is delivered right.


Stop re-correcting. Start learning.

Your users are already teaching your agent what they want. pref0 makes sure the lesson sticks.