Direct Preference Optimization Formatted: subjective_response_DPO_sample.json
Customizable Format for RLHF: subjective_response_RLHF_data.json
Us Folks is building a civic-tech platform that captures nuanced political values and tests how AGI might advocate for citizens. While our MVP for gathering novel subjective preference data is still a few weeks away, we currently have specialized pipelines to process and structure historical public-opinion data into anonymized, respondent-level datasets suitable for RLHF training.
Us Folks processes respondent-level microdata from political, public-sentiment, and consumer surveys into RLHF datasets. We also transform crosstabs-only data into statistically representative synthetic microdata, unlocking large quantities of suitable data, especially election exit polls and polling from leading firms, both recent and historical.
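As a rough illustration of the crosstabs-to-microdata step, the sketch below expands a single hypothetical two-way crosstab into respondent-level rows by sampling in proportion to cell counts. The table values and field names are invented for the example; a production pipeline reconciling multiple marginals would use something more sophisticated, such as iterative proportional fitting.

```python
import random

# Hypothetical two-way crosstab: respondent counts by age group x vote choice.
# Values are illustrative only, not drawn from any real poll.
crosstab = {
    ("18-29", "Candidate A"): 120,
    ("18-29", "Candidate B"): 80,
    ("30-44", "Candidate A"): 150,
    ("30-44", "Candidate B"): 170,
}

def synthesize_microdata(crosstab, n_rows, seed=0):
    """Sample synthetic respondents in proportion to crosstab cell counts."""
    rng = random.Random(seed)
    cells = list(crosstab)
    weights = [crosstab[c] for c in cells]
    rows = []
    for i in range(n_rows):
        age_group, vote_choice = rng.choices(cells, weights=weights)[0]
        rows.append(
            {"respondent_id": i, "age_group": age_group, "vote_choice": vote_choice}
        )
    return rows

sample = synthesize_microdata(crosstab, n_rows=1000)
```

With a fixed seed the output is reproducible, and the sampled cell frequencies converge to the crosstab's proportions as `n_rows` grows.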
For each response we generate a structured (profile → prompt → subjective response) tuple. We can produce millions of tuples per week, drawing on multi-decade polling archives with open data-use licenses.
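One way such a tuple could map onto a preference-learning record is sketched below, using the common DPO convention of (prompt, chosen, rejected) fields. The profile serialization, question, and answers are all hypothetical; the actual schema of `subjective_response_DPO_sample.json` may differ.

```python
import json

def to_dpo_record(profile, question, preferred, alternative):
    """Turn one (profile -> prompt -> subjective response) tuple into a
    DPO-style preference pair. Field names follow the common convention
    of (prompt, chosen, rejected); this is an illustrative assumption."""
    prompt = (
        "Respondent profile: "
        + "; ".join(f"{k}={v}" for k, v in sorted(profile.items()))
        + f"\nQuestion: {question}\nAnswer as this respondent would:"
    )
    return {"prompt": prompt, "chosen": preferred, "rejected": alternative}

record = to_dpo_record(
    profile={"age_group": "30-44", "region": "Midwest"},
    question="Should the city expand public transit funding?",
    preferred="Yes, transit investment reduces congestion and helps workers.",
    alternative="No, current service levels are adequate.",
)
print(json.dumps(record, indent=2))
```

Here the respondent's actual survey answer becomes `chosen` and a contrary position becomes `rejected`, so the pair encodes the subjective preference rather than a factual right answer.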
Launching soon...
An AI-driven platform empowering you to reshape government — at all levels, from local to national.