SPSA
SPSA (Simultaneous Perturbation Stochastic Approximation) is a stochastic approximation algorithm devised in the late 80s [1] and early 90s by James C. Spall [2]. It is an extension of the Finite Difference Stochastic Approximation (FDSA) algorithm, also known as the Kiefer-Wolfowitz algorithm, introduced in 1952 by Jack Kiefer and Jacob Wolfowitz [3], which in turn was motivated by the publication of the Robbins-Monro algorithm in 1951 [4].
The SPSA algorithm is suited for high-dimensional optimization problems. Given an objective function of a p-dimensional vector of adjustable weights, Theta or Θ, it uses a gradient approximation that requires only N+1 or 2N objective function measurements over all N iterations, i.e. at most two per iteration, regardless of the dimension p of the optimization problem - as opposed to FDSA, which needs p + 1 (one-sided) or 2p (two-sided) objective function measurements or simulations per iteration. At each iteration, a simultaneous perturbation vector Δ with mutually independent zero-mean random components is generated; a good choice for each component is the Rademacher distribution, taking the values +1 and -1 with probability ½ each. Two feature vectors Θ+ and Θ- are obtained by adding and subtracting the perturbation vector, scaled by the gain sequence ck, to/from the current feature vector Θ, and their objective function measurements are compared. Depending on the outcome, and scaled by the gain sequences ak and ck, the current feature vector is updated accordingly. Both gain sequences decrease with increasing iteration count, converging to 0. The theory pertains to both local optimization and global optimization in the face of multiple local optima [5].
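Written out in the notation of Spall's papers, with y denoting a (noisy) measurement of the objective function and the sign of the update chosen for maximization as in the tuning setting below, one SPSA iteration computes the gains

   a_k = \frac{a}{(A + k + 1)^{\alpha}}, \qquad c_k = \frac{c}{(k + 1)^{\gamma}}

approximates the i-th component of the gradient by

   \hat{g}_{k,i}(\Theta_k) = \frac{y(\Theta_k + c_k \Delta_k) - y(\Theta_k - c_k \Delta_k)}{2\, c_k\, \Delta_{k,i}}

and updates the weight vector

   \Theta_{k+1} = \Theta_k + a_k\, \hat{g}_k(\Theta_k)

The pseudo code in the next section follows this scheme, with the measured objective difference replaced by the ±2 result of a game pair, a constant scaling that can be absorbed into the coefficient a.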
Automated Tuning
α = 0.602; γ = 0.101;
for (k = 0; k < N; k++) {
   ak = a / (k + 1 + A)^α;                                // step-size gain sequence
   ck = c / (k + 1)^γ;                                    // perturbation gain sequence
   for each p
      Δp = 2 * round( rand() / (RAND_MAX + 1.0) ) - 1.0;  // Rademacher ±1
   Θ+ = Θ + ck*Δ;
   Θ- = Θ - ck*Δ;
   Θ += ak * match(Θ+, Θ-) / (ck*Δ);                      // element-wise update from the match result
}
In computer chess or games, where the objective function reflects the playing strength to be maximized, SPSA can be used for automated tuning of evaluation parameters as well as search parameters. A prominent SPSA instance is Stockfish's tuning method, introduced by Joona Kiiski in 2011 [6], where the objective function is measured once per iteration by playing a pair of games with Θ+ versus Θ-, the function "match" returning a result in the ±2 range, see the pseudo code above. The selection of the coefficients A, a, c, α and γ, which determine the initial values and time decay of the gain sequences ak and ck, is critical to the performance of SPSA. Spall recommends α = 0.602 and γ = 0.101, the lowest allowable values which theoretically guarantee convergence; see further the practical suggestions in Spall's 1998 SPSA implementation paper [7].
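For illustration, a minimal self-contained C++ sketch of the loop from the pseudo code above follows. The names spsa_tune and MatchFn as well as the toy objective in main are hypothetical, and the coefficient values are placeholders rather than Stockfish's settings; in an engine, match would run a pair of games between the parameter sets Θ+ and Θ- and return a result in the {-2, 0, +2} range.

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <functional>
#include <random>
#include <vector>

// match(thetaPlus, thetaMinus) measures the objective once per iteration,
// e.g. as the result of a pair of games in the {-2, 0, +2} range.
using MatchFn = std::function<double(const std::vector<double>&,
                                     const std::vector<double>&)>;

void spsa_tune(std::vector<double>& theta, int N, const MatchFn& match,
               double a, double c, double A,
               double alpha = 0.602, double gamma = 0.101)
{
    std::mt19937 rng(42);
    std::bernoulli_distribution coin(0.5);
    const std::size_t p = theta.size();
    std::vector<double> delta(p), plus(p), minus(p);

    for (int k = 0; k < N; ++k) {
        const double ak = a / std::pow(k + 1 + A, alpha);    // step-size gain
        const double ck = c / std::pow(k + 1.0, gamma);      // perturbation gain

        for (std::size_t i = 0; i < p; ++i) {
            delta[i] = coin(rng) ? 1.0 : -1.0;               // Rademacher +/-1
            plus[i]  = theta[i] + ck * delta[i];
            minus[i] = theta[i] - ck * delta[i];
        }
        const double result = match(plus, minus);            // one measurement per iteration
        for (std::size_t i = 0; i < p; ++i)
            theta[i] += ak * result / (ck * delta[i]);       // element-wise SPSA update
    }
}

int main()
{
    // Toy stand-in for playing strength with its maximum at theta = (10, 20).
    auto strength = [](const std::vector<double>& t) {
        return -(t[0] - 10.0) * (t[0] - 10.0) - (t[1] - 20.0) * (t[1] - 20.0);
    };
    // Noisy comparison of the two perturbed parameter sets, quantized like a game pair.
    std::mt19937 rng(1);
    std::normal_distribution<double> noise(0.0, 1.0);
    MatchFn match = [&](const std::vector<double>& tp, const std::vector<double>& tm) {
        const double d = strength(tp) - strength(tm) + noise(rng);
        return d > 0.5 ? 2.0 : (d < -0.5 ? -2.0 : 0.0);
    };
    std::vector<double> theta = {0.0, 0.0};
    spsa_tune(theta, 20000, match, /*a=*/2.0, /*c=*/2.0, /*A=*/2000.0);
    std::printf("theta = (%.2f, %.2f)\n", theta[0], theta[1]);
    return 0;
}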
RSPSA
Already at the Advances in Computer Games 11 conference at Academia Sinica, Taipei, Taiwan in 2005, Levente Kocsis, Csaba Szepesvári, and Mark Winands introduced SPSA to the game programming community and discussed several methods to enhance its performance, including the use of common random numbers and antithetic variates, a combination of SPSA with RPROP (resilient backpropagation), and the reuse of samples from previous performance evaluations. RPROP, though originally devised for the training of neural networks, is applicable to any optimization task where the gradient can be computed or approximated [8]. The resulting RSPSA (Resilient SPSA) was successfully applied to parameter tuning in the domains of Poker and Lines of Action [9].
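As an illustration of the SPSA/RPROP combination, the sketch below applies RPROP's per-parameter step-size adaptation to the sign of an SPSA gradient estimate: the step grows while the estimated sign stays the same and shrinks when it flips. The names RpropState and rspsa_update as well as the default constants are hypothetical; this is only a sketch of the basic idea, not the full RSPSA algorithm of Kocsis et al., which additionally uses common random numbers, antithetic variates, and the reuse of earlier evaluations.

#include <algorithm>
#include <cstddef>
#include <vector>

// Per-parameter state: step sizes and the sign of the previous gradient estimate,
// e.g. initialized with step = 0.05 and prevSign = 0 for every parameter.
struct RpropState {
    std::vector<double> step;
    std::vector<double> prevSign;
};

// theta:   current parameter vector (maximization, as in the tuning setting above)
// gradEst: SPSA gradient estimate, e.g. result / (ck * delta[i]) per component
void rspsa_update(std::vector<double>& theta, const std::vector<double>& gradEst,
                  RpropState& s, double etaPlus = 1.2, double etaMinus = 0.5,
                  double stepMin = 1e-4, double stepMax = 1.0)
{
    for (std::size_t i = 0; i < theta.size(); ++i) {
        const double sign = gradEst[i] > 0.0 ? 1.0 : (gradEst[i] < 0.0 ? -1.0 : 0.0);
        if (sign * s.prevSign[i] > 0.0)       // same direction as last time: grow the step
            s.step[i] = std::min(s.step[i] * etaPlus, stepMax);
        else if (sign * s.prevSign[i] < 0.0)  // direction flipped: shrink the step
            s.step[i] = std::max(s.step[i] * etaMinus, stepMin);
        theta[i] += s.step[i] * sign;         // move by the adapted step in the sign direction
        s.prevSign[i] = sign;
    }
}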
See also
Selected Publications
1987 ...
- James C. Spall (1987). A Stochastic Approximation Technique for Generating Maximum Likelihood Parameter Estimates. Proceedings of the American Control Conference, Minneapolis, MN, pdf reprint
1990 ...
- James C. Spall (1992). Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation. IEEE Transactions on Automatic Control, Vol. 37, No. 3, pdf
- James C. Spall (1997). A one-measurement form of SPSA. Automatica, Vol. 33, No. 1, pdf
- Payman Sadegh (1997). Constrained Optimization via Stochastic Approximation with a Simultaneous Perturbation Gradient Approximation. Automatica, Vol. 33, No. 5, pdf
- James C. Spall (1998). An Overview of the Simultaneous Perturbation Method for Efficient Optimization. Johns Hopkins APL Technical Digest, Vol. 19, No. 4, pdf
- James C. Spall (1998). Implementation of the Simultaneous Perturbation Algorithm for Stochastic Optimization. IEEE Transactions on Aerospace and Electronic Systems, Vol. 34, No. 3, pdf
- James C. Spall (1999). Stochastic Optimization: Stochastic Approximation and Simulated Annealing. in John G. Webster (ed.) (1999). Encyclopedia of Electrical and Electronics Engineering, Vol. 20, John Wiley & Sons, pdf
2000 ...
- James C. Spall (2000). Adaptive Stochastic Approximation by the Simultaneous Perturbation Method. IEEE Transactions on Automatic Control, Vol. 45, No. 10, pdf
- László Gerencsér, Stacy D. Hill, Zsuzsanna Vágó, Zoltán Vincze (2004). Discrete optimization, SPSA and Markov Chain Monte Carlo methods. Proceedings of the 2004 American Control Conference, pdf
- Stacy D. Hill (2005). Discrete Stochastic Approximation with Application to Resource Allocation. Johns Hopkins APL Technical Digest, Vol. 26, No. 1, pdf
- Levente Kocsis, Csaba Szepesvári, Mark Winands (2005). RSPSA: Enhanced Parameter Optimization in Games. Advances in Computer Games 11, pdf, pdf
- Levente Kocsis, Csaba Szepesvári (2006). Universal Parameter Optimisation in Games Based on SPSA. Machine Learning, Special Issue on Machine Learning and Games, Vol. 63, No. 3, pdf
- Mohammed Shahid Abdulla, Shalabh Bhatnagar (2007). Reinforcement Learning Based Algorithms for Average Cost Markov Decision Processes. Discrete Event Dynamic Systems, Vol. 17, No. 1
- John L. Maryak, Daniel C. Chin (2008). Global Random Optimization by Simultaneous Perturbation Stochastic Approximation. IEEE Transactions on Automatic Control, Vol. 53, No. 3, pdf
- Qing Song, James C. Spall, Yeng Chai Soh, Jie Ni (2008). Robust Neural Network Tracking Controller Using Simultaneous Perturbation Stochastic Approximation. IEEE Transactions on Neural Networks, Vol. 19, No. 5, 2003 pdf » Neural Networks
2010 ...
- Shalabh Bhatnagar, H.L. Prasad, L.A. Prashanth (2013). Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods. Lecture Notes in Control and Information Sciences, Vol. 434, Springer
- Qi Wang (2013). Optimization with Discrete Simultaneous Perturbation Stochastic Approximation Using Noisy Loss Function Measurements. Ph.D. thesis, Johns Hopkins University, advisor James C. Spall
- Pushpendre Rastogi, Jingyi Zhu, James C. Spall (2016). Efficient implementation of Enhanced Adaptive Simultaneous Perturbation Algorithms. CISS 2016, pdf
Forum Posts
2010 ...
- Stockfish's tuning method by Joona Kiiski, CCC, October 07, 2011 » Stockfish's Tuning Method
- Re: Stockfish's tuning method by Rémi Coulom, CCC, October 07, 2011
- Tuning again by Ed Schroder, CCC, November 01, 2011
- Goodbye CLOP, hello SPSA by Gary Linscott, FishCooking, May 17, 2014 » CLOP
- Re: Eval tuning - any open source engines with GA or PBIL? by Jon Dart, CCC, December 06, 2014 » PBIL [10]
2015 ...
- Re: A plea to someone by Lyudmil Antonov, CCC, April 07, 2015
- Re: A plea to someone by Jon Dart, CCC, April 08, 2015
- Automatic Criterion for stopping SPSA? by tsa..., FishCooking, May 29, 2015
- Re: SPSA Tuner by Lyudmil Antonov, FishCooking, July 20, 2015
- Too small number of games in SPSA by Lyudmil Antonov, FishCooking, April 22, 2016
- Self-correcting SPSA tuner for chess engines by Ivan Ivec, FishCooking, January 04, 2017
- SPSA problems by Ralf Müller, CCC, April 02, 2017
- SPSA and search.cpp? by Nick Pelling, FishCooking, January 06, 2019 » Stockfish
2020 ...
- A hybrid of SPSA and local optimization by Niels Abildskov, CCC, June 01, 2021 » Texel's Tuning Method
External Links
- SPSA Algorithm by James C. Spall
- Simultaneous perturbation stochastic approximation - Wikipedia
- SPSA Tuner for Stockfish Chess Engine by Joona Kiiski » Stockfish
- GitHub - lantonov/spsa: Modifications of SPSA by Lyudmil Antonov » Stockfish
- GitHub - jgomezdans/spsa: Simultaneous perturbation stochastic approximation Python code
- Simultaneous Perturbation Stochastic Approximation code in python · GitHub
- NIPS 2012 Tutorial - Stochastic Search and Optimization, Video Lecture by James C. Spall
- hr-Bigband feat. Richard Bona - Kalabancoro, October, 2019, Hessischer Rundfunk, YouTube Video
References
- ↑ James C. Spall (1987). A Stochastic Approximation Technique for Generating Maximum Likelihood Parameter Estimates. Proceedings of the American Control Conference, Minneapolis, MN, pdf reprint
- ↑ James C. Spall (1992). Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation. IEEE Transactions on Automatic Control, Vol. 37, No. 3
- ↑ Jack Kiefer, Jacob Wolfowitz (1952). Stochastic Estimation of the Maximum of a Regression Function. The Annals of Mathematical Statistics, Vol. 23, No. 3
- ↑ Herbert Robbins, Sutton Monro (1951). A Stochastic Approximation Method. The Annals of Mathematical Statistics, Vol. 22, No. 3
- ↑ John L. Maryak, Daniel C. Chin (2008). Global Random Optimization by Simultaneous Perturbation Stochastic Approximation. IEEE Transactions on Automatic Control, Vol. 53, No. 3, pdf
- ↑ Stockfish's tuning method by Joona Kiiski, CCC, October 07, 2011
- ↑ James C. Spall (1998). Implementation of the Simultaneous Perturbation Algorithm for Stochastic Optimization. IEEE Transactions on Aerospace and Electronic Systems, Vol. 34, No. 3, pdf
- ↑ Martin Riedmiller, Heinrich Braun (1993). A direct adaptive method for faster backpropagation learning: The RPROP algorithm. IEEE International Conference On Neural Networks, pdf
- ↑ Levente Kocsis, Csaba Szepesvári, Mark Winands (2005). RSPSA: Enhanced Parameter Optimization in Games. Advances in Computer Games 11, pdf, pdf
- ↑ NOMAD - A blackbox optimization software