Search Progression

From Chessprogramming wiki
Revision as of 06:28, 8 July 2024

When creating a new engine from scratch, it is often hard to decide which feature to implement first. Below is some advice from the authors of top chess engines.

Connorpasta

A message sent by Seer author Connor McMonigle in the OpenBench Discord. It earned the name "Connorpasta" because of how often it is pasted in computer chess Discord servers.

A reasonable search feature progression (starting with vanilla TT (sorting TT move first), PVS, QS and aspiration windows which are all pretty fundamental) imo is: NMP, LMR (log formula is most principled ~ there are a number of adjustments you can experiment with), static NMP (aka RFP), 
butterfly history heuristic, LMP, futility pruning, CMH+FMH, QS SEE pruning, PVS SEE pruning (captures and quiets), QS delta pruning, history pruning, capture history heuristic, singular extensions, multicut (using singular search result).
(with a healthy amount of parameter tweaking after each addition)
Idk if I'm missing anything major. Those search heuristics constitute the vast majority of the Elo you'll find in any top engine, though the details of the implementation are very important.

Improved Connorpasta

Written by Integral author Aron Petkovski. It improves on Connorpasta by including more features and covering a wider range of topics.

A reasonable search feature progression assumes you have the fundamentals, i.e. negamax and alpha/beta pruning (ideally in a fail-soft framework).

There are also time management adjustments that can be made at any point after adding iterative deepening. Ideally you have:

  • Hard bound (applies to the entire search)
  • Soft bound (checked on each new depth in the ID loop)

For the soft bound, the progression can go something like this:

  • Node-based scaling
  • Best move stability
  • Eval stability

Additionally, there should be a healthy amount of parameter tweaking after each addition. There are other minor features that top engines have, but the ones above constitute the majority of the Elo you will find in them.