Using this site
- Selections. View predicted horses for a chosen date. Pick Date, Model (Consensus or any single model), and optionally Course(s) or Country. The list updates when you change any filter. The starred horse (★) is the main tip per race; others are ranked by the model. Use the "Generated" date to see which day the selections were produced for.
- Results. After races have been run, pick a Date (from the dropdown of dates we have results for), then optionally Model and Course(s). You'll see win/place/loss, stakes, returns, and P/L for that day's tips.
- Statistics. See performance over time. Choose Time period (e.g. last 7, 30, 90 days or all time), then optionally filter by Model and Course. The table shows strike rate, ROI, average odds, wins, places, number of bets, and profit per course and model.
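The headline numbers in the Statistics table follow the standard betting definitions. As a quick illustration (with made-up figures, not site data):

```python
def strike_rate(wins, bets):
    """Winning tips as a percentage of all tips."""
    return 100 * wins / bets

def roi(profit, staked):
    """Return on investment: profit (or loss) as a percentage of total stakes."""
    return 100 * profit / staked

# e.g. 12 winners from 50 one-point level stakes returning 56 points:
# strike rate 24%, profit 6 points, ROI 12%
```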
The models
We use four prediction models, plus a combined Consensus:
- Consensus (All Models). Combines all four. Each horse gets a YetiScore from model agreement (how many had it 1st, 2nd, 3rd), average probability, and position weighting. The top horse per race is the main tip (★).
- Recent Global. One model trained on the last 5 years of data from all courses.
- Recent Course-Specific. Separate models per course, each trained on the last 5 years for that course.
- Complete Global. One model trained on all historical data (no 5-year limit).
- Complete Course-Specific. Separate models per course, each trained on all historical data for that course.
All models use the same "back to basics" features: age, sex, weight, draw, course, distance, going, runs, official rating (OR), RPR, TS and sire. They predict win probability and rank horses; we turn that into tips and (for Consensus) YetiScore.
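That last step can be sketched minimally: each model scores every runner, and a race's tip is simply the highest-probability horse. The field names below are illustrative, not the pipeline's actual schema.

```python
def rank_race(runners):
    """Sort one race's runners by predicted win probability, best first.

    runners: list of dicts with 'horse' and 'win_prob' keys
    (a hypothetical schema for illustration)."""
    return sorted(runners, key=lambda r: r["win_prob"], reverse=True)

race = [
    {"horse": "Alpha", "win_prob": 0.18},
    {"horse": "Bravo", "win_prob": 0.31},
    {"horse": "Clover", "win_prob": 0.11},
]
ranked = rank_race(race)
# ranked[0] is the model's tip for this race ("Bravo" in this toy example)
```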
The pipeline
Selections and results are produced by an automated pipeline (run separately from this site). In order:
- Data. Past results are downloaded (e.g. from Racing Post) and imported into the database. Training data is exported as "recent" (5 years) and "complete" (all time) CSVs.
- Training. All four model variants are trained (or retrained when new results are available): Recent Global, Recent Course-Specific, Complete Global, Complete Course-Specific.
- Racecards. For the target day, racecards (runners, weights, going, etc.) are downloaded and converted to a format the models can read.
- Predictions. Each of the four models predicts win probability for every runner. These predictions are written to the selections database.
- Consensus. From the four model outputs, consensus selections are computed (YetiScore, top N per race) and also stored in the database.
- Recording & results. Selections are stored for the dashboard. After races run, results are updated (finish positions, settlement), and stats (e.g. course/model performance) are regenerated so Results and Statistics stay up to date.
This site only displays data from that pipeline. It does not run the pipeline; that is done by scripts (e.g. daily automation) on the server.
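The ordering above can be summarised as a simple sequence. The step names below are illustrative labels only, not the actual script names used on the server.

```python
PIPELINE_STEPS = [
    "import_results",        # download past results and load the database
    "export_training_csvs",  # "recent" (5 years) and "complete" (all time)
    "train_models",          # the four variants described above
    "fetch_racecards",       # target day's runners, weights, going, etc.
    "predict",               # per-model win probability for every runner
    "compute_consensus",     # YetiScore and top-N selections per race
    "record_selections",     # store everything for the dashboard
    "update_results",        # after racing: positions, settlement, stats
]

def run_pipeline(steps=PIPELINE_STEPS):
    """Run each step in order (stubbed here) and return the completion log."""
    completed = []
    for step in steps:
        completed.append(step)  # a real runner would execute the step here
    return completed
```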
YetiScore (Consensus)
For the Consensus view, each horse gets a score out of 100. It combines:
- Position votes: More points if models ranked the horse 1st (10 pts), 2nd (4), 3rd (2), or 4th+ (1).
- Average probability: The models' average win probability for that horse.
- Agreement: A small bonus when more models agree (up to all four).
Horses are sorted by YetiScore within each race. The top one is the main tip; the "Model votes" text shows which models had it 1st, 2nd, or 3rd.
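Putting the three ingredients together, a hypothetical sketch might look like the following. Only the 10/4/2/1 position points come from the description above; the blend weights and normalisation are assumptions for illustration, not the site's actual formula.

```python
POSITION_POINTS = {1: 10, 2: 4, 3: 2}  # anything 4th or worse earns 1 point

def yeti_score(ranks, probs):
    """ranks: this horse's rank from each of the four models (1 = top pick).
    probs: each model's win probability for this horse. Returns a 0-100 score."""
    n = len(ranks)
    votes = sum(POSITION_POINTS.get(r, 1) for r in ranks)  # position votes
    avg_prob = sum(probs) / len(probs)                     # average probability
    agreement = sum(1 for r in ranks if r == 1) / n        # share of 1st-place votes
    # Blend weights (0.6 / 0.3 / 0.1) are illustrative assumptions.
    raw = 0.6 * votes / (10 * n) + 0.3 * avg_prob + 0.1 * agreement
    return round(100 * raw, 1)
```

With this weighting, a horse all four models rank 1st with healthy probabilities scores far above one that only picks up scattered 2nd and 3rd places, which matches the intent of the vote and agreement bonuses.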