Reasoned probabilities powered by collective intelligence
Usage & Benefits
Collect and aggregate the latent forecasting wisdom of your employees on a large scale:
Reliable probabilities
Insightful narratives
Enhance the accuracy, precision, and timeliness of your organization’s forecasts.
Vet the assumptions behind your model-based forecasts.
Encourage and verify alignment across different cohorts.
Identify champion forecasters within your organization.
Train everyone to forecast better with objective scoring and feedback.
Service
License and deploy our full-featured online crowd-forecasting platform under your own brand.
Ask as many questions as you like, as often as you like, to as many forecasters as you can recruit within your community.
Hypermind provides hosting, technical support, and training in content administration.
Two main modes of forecast elicitation and aggregation
AI-powered curation and synthesis of forecast rationales reveals the story behind the quantitative crowd prediction.
Deliverables
Full-featured crowd-forecasting platform, skinned to your brand
Cloud delivery or local installation with integration (SSO, etc.)
Rich variety of question types and reward schemes (see below)
AI-powered collection and curation of forecasting rationales
Customizable best-practice forecast-aggregation algorithms (see the sketch after this list)
Segmentation of the forecaster population into cohorts of your choosing, each with its own crowd forecast
Other features include:
Performance dashboards, leaderboards, message boards
Bilingual user interface in English and French
Automatic keyword-based online news retrieval
Forecasting alarms with email notifications
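One widely documented best practice for aggregation is to average individual probabilities in log-odds space and then extremize the result, offsetting the information that forecasters share. The Python sketch below illustrates that baseline only; it is not Hypermind's proprietary algorithm, and the function name and extremization constant are assumptions for illustration.

```python
# A common best-practice aggregation baseline: average individual
# probabilities in log-odds space, then extremize. Illustrative only;
# not Hypermind's proprietary algorithm.
import math

def aggregate(probabilities, extremization=1.5):
    """Combine individual probability forecasts into one crowd forecast.

    probabilities: floats in (0, 1), one per forecaster.
    extremization: a constant > 1 that pushes the average away from 0.5,
    compensating for information the forecasters have in common.
    """
    log_odds = [math.log(p / (1 - p)) for p in probabilities]
    mean_log_odds = sum(log_odds) / len(log_odds)
    z = extremization * mean_log_odds  # extremize in log-odds space
    return 1 / (1 + math.exp(-z))      # map back to a probability

print(aggregate([0.6, 0.7, 0.65]))  # -> roughly 0.72, vs. a plain mean of 0.65
```

In the published forecasting literature, the extremization constant is usually tuned on past resolved questions.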
Available types of forecasting questions
Prescience lets you formulate forecasting questions in many different ways, giving you the flexibility to ask exactly the questions you want and to get answers in the format you need.
Binary
The correct answer can be yes or no.
E.g., “Will John Smith win the election?” Yes | No
Forecasters assign probabilities to each option.
Discrete Winner-Take-All
The correct answer can be one of several discrete outcomes.
E.g., “Who will win the election?” Smith | Okele | Steiner | Other
Forecasters assign probabilities to each option.
Ordered Winner-Take-All
The correct answer can be one of several contiguous intervals or options.
E.g., “When will the loser concede the election?” Nov | Dec | Maybe Later
Forecasters assign a probability distribution over all options.
Graded
The correct answer is a distribution over several options.
E.g., “What percentage of the vote will go to each candidate?”
Forecasters assign quantities to each option.
Continuous
The correct answer is a value in a range.
E.g., “How many million dollars will Smith’s campaign raise next quarter?”
Forecasters specify a probability distribution over a continuous range.
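To make these formats concrete, here is a minimal sketch of what a forecaster's submission could look like for each question type. The field names and the validation helper are hypothetical illustrations, not Prescience's actual data model.

```python
# Hypothetical submission payloads, one per question type. Field names
# are illustrative; they are not Prescience's actual data model.

binary = {"Yes": 0.7, "No": 0.3}                            # probabilities sum to 1
discrete_wta = {"Smith": 0.5, "Okele": 0.3, "Steiner": 0.15, "Other": 0.05}
ordered_wta = {"Nov": 0.6, "Dec": 0.3, "Maybe Later": 0.1}  # ordered intervals
graded = {"Smith": 48.0, "Okele": 41.0, "Steiner": 11.0}    # quantities (vote %)
continuous = {"p10": 8.0, "median": 12.0, "p90": 20.0}      # quantiles summarizing
                                                            # a distribution ($M raised)

def is_valid_probability_forecast(forecast, tol=1e-9):
    """The probabilistic question types require a proper distribution."""
    return (all(0.0 <= p <= 1.0 for p in forecast.values())
            and abs(sum(forecast.values()) - 1.0) < tol)

assert is_valid_probability_forecast(binary)
assert is_valid_probability_forecast(discrete_wta)
assert is_valid_probability_forecast(ordered_wta)
```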
Forecast elicitation and scoring modes
Any of the question types may be combined with various elicitation and scoring modes, depending on your requirements for ground truth and forecasting horizon.
Fixed Ground Truth
Ground-truth forecasts are made relative to a fixed event horizon.
E.g., “Will Smith win the election in November?” Yes | No
Resolution and scoring occur when the event horizon is reached.
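At the event horizon, each forecast can be scored with a proper scoring rule. As a sketch, here is the standard Brier score, a common choice for this kind of resolution; the platform's exact scoring rule is not specified here.

```python
# Scoring a fixed-ground-truth question at its event horizon with the
# standard Brier score (lower is better). A common proper scoring rule;
# the platform's exact rule may differ.

def brier_score(forecast, outcome):
    """forecast: dict mapping option -> probability; outcome: what occurred."""
    return sum((p - (1.0 if option == outcome else 0.0)) ** 2
               for option, p in forecast.items())

# "Will Smith win the election in November?" resolves Yes:
print(brier_score({"Yes": 0.8, "No": 0.2}, "Yes"))  # about 0.08
```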
Rolling Ground Truth
Ground-truth forecasts are made relative to a fixed time window that rolls forward daily.
E.g., “30 days from now, will the election’s loser have conceded?” Yes | No
Scoring occurs on a daily basis as the resolution window rolls forward.
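In effect, a rolling question resolves once per day: the forecast submitted 30 days earlier comes due and is scored against the state of the world observed today. A minimal sketch, with illustrative names:

```python
# Rolling-ground-truth scoring sketch: each day, the forecast made
# WINDOW days earlier comes due. Names are illustrative.
from datetime import date, timedelta

WINDOW = timedelta(days=30)

def brier(p_yes, outcome_is_yes):
    """Binary Brier score (lower is better)."""
    truth = 1.0 if outcome_is_yes else 0.0
    return (p_yes - truth) ** 2 + ((1 - p_yes) - (1 - truth)) ** 2

def score_today(forecasts, today, conceded_today):
    """forecasts: {submission_date: p_yes} for the 30-day rolling question.
    Scores the forecast whose window closes today, if one exists."""
    due = today - WINDOW
    if due in forecasts:
        return brier(forecasts[due], conceded_today)
    return None  # nothing came due today

# A forecast of 0.9 made on Oct 15 is scored against the truth on Nov 14.
print(score_today({date(2024, 10, 15): 0.9}, date(2024, 11, 14), True))  # ~0.02
```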
Drip Rewards
Our patent-pending reward scheme incentivizes long-term ground-truth forecasting using a series of small “drip” rewards along the way.
E.g., “In 10 years, will Scotland still be part of the United Kingdom?” Yes | No
Ground-truth forecasts are made relative to a fixed long-term event horizon.
Small “drip” rewards based on the current crowd forecast (used as proxy for ground truth) are distributed at random times throughout the forecasting period.
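Because the scheme is patent-pending, its details are not public. The sketch below is only a loose illustration of the general idea of paying small, randomly timed rewards for tracking the current crowd forecast; every name and formula in it is an assumption rather than the actual mechanism.

```python
# Loose illustration of drip rewards: at random times before the long-term
# horizon, reward each forecaster for proximity to the current crowd
# forecast, used as a proxy for the still-unknown ground truth.
# NOT Hypermind's patent-pending scheme; every detail here is assumed.
import random

def drip_reward(personal_p, crowd_p, pot=1.0):
    """Pay more the closer a personal probability is to the crowd's."""
    return pot * (1.0 - abs(personal_p - crowd_p))

def maybe_drip(forecasters, crowd_p, daily_chance=0.05):
    """Each day there is a small chance that a drip event fires."""
    if random.random() < daily_chance:
        return {name: drip_reward(p, crowd_p) for name, p in forecasters.items()}
    return {}

# E.g., with the crowd at 0.65 on the Scotland question:
rewards = maybe_drip({"alice": 0.70, "bob": 0.40}, crowd_p=0.65)
```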
Bayesian Inferential Truth
Incentivizes long-term or deep-time forecasting, with or without eventual ground truth, while providing immediate scoring and rewards.
Based on MIT’s Bayesian Truth Serum and Surprisingly Popular algorithms.
Subjective personal forecasts or estimates are elicited together with predictions of the average forecast or estimate across all participants.
E.g., “In 2100, what will be the average global temperature? Please give your estimate, and also predict the average estimate that others will give.”
Resolution and scoring occur as soon as all participants have stated their answers.
Scoring considers the objective accuracy of one’s prediction of others’ average estimate, as well as how surprisingly common one’s personal estimate is.
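As a concrete illustration of the Surprisingly Popular rule (Prelec, Seade & McCoy, Nature, 2017), the sketch below selects the answer whose actual popularity most exceeds the popularity participants predicted for it. It illustrates the published idea, not Prescience's exact scoring.

```python
# Sketch of the Surprisingly Popular rule: choose the answer that is
# more popular than participants predicted it would be.

def surprisingly_popular(answers, predicted_shares):
    """answers: each participant's own answer, e.g. "Yes" or "No".
    predicted_shares: each participant's prediction of how popular
    every answer will be among all participants (shares sum to 1)."""
    options = list(predicted_shares[0])
    actual = {o: answers.count(o) / len(answers) for o in options}
    predicted = {o: sum(p[o] for p in predicted_shares) / len(predicted_shares)
                 for o in options}
    return max(options, key=lambda o: actual[o] - predicted[o])

# Classic demonstration: most people wrongly answer Yes to "Is Philadelphia
# the capital of Pennsylvania?", yet nearly everyone predicts Yes will be
# the popular answer, so No is surprisingly popular (and correct).
answers = ["Yes"] * 6 + ["No"] * 4
predictions = [{"Yes": 0.8, "No": 0.2}] * 10
print(surprisingly_popular(answers, predictions))  # -> "No"
```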