Author: Robert Hyatt
Date: 07:01:20 01/13/00
On January 13, 2000 at 07:24:09, Vincent Diepeveen wrote:

>On January 12, 2000 at 20:52:53, James Robertson wrote:
>
>>These two issues appeared in some threads a ways down, and reminded me
>>that I might want to try them.
>
>Checks in qsearch certainly are important, but it's very hard to write
>code such that you do only good checks and in that way limit them.
>
>>Could someone who is using/was using them explain how they affected his
>>program?
>
>Checks in qsearch are very important for DIEP, as without them the last
>few plies where I nullmove more or less do not take into account the
>biggest threat of them all (a possible mate).
>
>>Also, what is the exact definition of "SE"? I have several conflicting
>>ideas of what they might/must be, and all are probably wrong. Could
>>someone please explain the method and advantages?
>
>In fact, a move in a position p is singular if it is better than all
>other moves in position p by a margin S. For Deep Blue this margin is
>S = 38/128 of a pawn.
>
>Now if we take into account that our window is [alpha,beta], then we can
>see that we can get basically three different types of singular
>extensions:
>
> singular move >= beta
> singular move > alpha
> singular move <= alpha

This last one is not doable, because there are no accurate scores. You can
detect singularity perfectly if score(singular move) > alpha AND
score(singular move) < beta. You can detect singularity less accurately if
score(singular move) >= beta, because you have to test the other moves
with a reduced depth, or else suffer through a pure minimax search. There
is no known definition for singularity when score(singular move) <= alpha.

>Usually most captures are not considered in singular extensions. In
>general it is not smart to extend captures, though some programs do
>extend recaptures.
>
>The easiest implementable form of singular move is the PV singular move,
>i.e. best move > alpha. You search all other moves anyway, so you now
>only need to search them a second time with X = score(best move) - S,
>i.e. with the window [X,X+1].

Usually you don't search them a second time, and you almost never search
them _all_ a second time. With decent move ordering, the first move is
best more than 90% of the time, so that no moves get re-searched while
proving this move to be singular.

>If you get back X or below for that re-search, then the move is singular.
>In fact you only needed one narrow-window re-search here to figure out
>whether something is alpha singular.

New term???

>If you take a few test problems and look at which moves, from a human
>perspective and also from an evaluation perspective, are always hard to
>find, then we clearly see that in a given position x there is a move
>with score >= beta while all other moves are < beta.

This is called "fail-high singular" in the JICCA paper.

>Now if we find such a move, we must of course figure out whether it's
>singular with margin S, or you can simply say: "I always extend this
>move, and I put a flag in the hashtable so that the next time I come to
>this move it also gets extended."
>
>The trick that keeps this from being a minimax search is that you search
>all moves other than the best move with a smaller depth. This trick is of
>course already known from nullmove. Though I don't want to risk a big
>fight here about what was invented first in which programs, I'm sure this
>idea already existed in nullmove implementations but was not yet
>described, so probably it was invented by the ChipTest team themselves.

Not sure what you mean.
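To make the PV-singular (alpha-singular) test concrete, here is a minimal
sketch in C. It is an illustration only, not code from DIEP, Crafty, or
Deep Blue: probe(), the margin S, and the reduction R are made-up names
standing for whatever your program actually uses. probe() is assumed to
search one move with the given window and depth and return its score.

    /* hypothetical interface: search a single move with window
       [alpha,beta] at the given depth, return its score */
    typedef int (*probe_fn)(int move, int alpha, int beta, int depth);

    /* moves[0] already has an exact score best_score, with
       alpha < best_score < beta.  It is PV singular if every sibling,
       probed at reduced depth against X = best_score - S, fails low. */
    int is_pv_singular(const int *moves, int n, int best_score,
                       int S, int depth, int R, probe_fn probe)
    {
        int X = best_score - S;                 /* offset test value */
        for (int i = 1; i < n; i++) {
            /* one zero-window re-search [X,X+1] per sibling, depth-R */
            if (probe(moves[i], X, X + 1, depth - R) > X)
                return 0;      /* a sibling is within S: not singular */
        }
        return 1;              /* every sibling <= X: extend moves[0] */
    }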
Null-move search was _definitely_ being done for years prior to the
development of singular extensions.

>Yet more or less you're searching *all* moves at a beta singular move,
>and you need more than one re-search too, as a beta value can be any
>value from beta to mate in xx. So it is best to re-search the beta move
>with a window [beta,beta+S]. If it comes out above beta+S, then you can
>directly extend it as singular; otherwise you have a true score T for the
>beta move and need another re-search, [T-S,T], to see whether your beta
>fail high is singular by more than the margin S.
>
>Lots of re-searches, even though it's all at reduced depth!
>
>Now we come to the hardest problem: a move <= alpha. The problem is the
>same as for a move >= beta: you simply don't know its true value.
>Now you can of course say "I need to search them anyway to the full
>depth", but that's with the current window [alpha,beta].

This simply isn't doable, and Deep Thought/Deep Blue didn't even attempt
to do it, for obvious technical reasons.

>Now you do a shorter search, but with a window [-infinite,alpha].
>If you want to be clever and just take the best move from the list and
>use that value, then you're smoked, as it can be a nullmove-based value:
>if a move m with score s < alpha is seen as best at this position, it's
>very well possible that it's by far not the best move, as this score can
>be a nullmove- or quiescence-search-based score, so in fact it was
>actually s-i where i is quite large. Other moves then suddenly have a
>good chance of being better than s-i. The only thing you can do, when you
>start wondering how to figure out whether a move in the list is singular
>with a margin S, is to search with a window [-infinite,s], as you know
>pretty surely that all moves are below s. Yet even with a reduced search
>depth, this overhead is huge.

This (a) doesn't work and (b) wastes a huge number of nodes doing so. To
detect singularity at fail-low nodes requires a minimax search, and trying
to detect it seems pointless anyway, as all moves are bad...

>More or less you want to have true bounds for moves before starting a
>singular search. PV singular moves are relatively easy to do, but the
>most interesting moves are obviously beta singular moves, which are the
>most expensive to re-search: otherwise you would have gotten a cheap
>cutoff at once, and now you are busy searching and re-searching the other
>moves.
>
>OK, now from the branching-factor viewpoint. Let's assume that finding
>beta-singular moves is done using a reduction factor of 2, so you search
>the other moves with depth reduced by 2.
>
>Now I know that Insomniac has a pretty good branching factor. Let's say
>your branching factor is 2.5 to 3 at the bigger depths?
>
>I guess you never measured what the average number of semi-legal moves
>is. Well, I did measure it. The nonsense the average program searches
>gets it to positions with about 40 moves.
>
>So you need at least 39 sibling searches of depth d-2; call their total
>cost 39x, where x is the cost of one depth d-2 search. The normal search
>of the beta singular move is at depth d. With a branching factor of 2.5
>to 3, let's optimistically use 2.5: getting the cutoff from the beta
>singular move took 2 plies of extra effort, i.e. 2.5 * 2.5 * x = 6.25x.
>
>HOWEVER, the number of nodes seen (not even counting the re-search of the
>beta move, which is also expensive) was already 39x.
>
>So your overhead is AT LEAST 39x / 6.25x = 6.24, which is nearly 2 plies.
>
>So just testing at beta singular moves loses your program at least 2
>plies of search depth.

This is probably too high.
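For the fail-high case, here is a similar sketch of the two-step re-search
Vincent outlines: first pin down the fail-high score with [beta,beta+S],
then probe the siblings at reduced depth. Again all names are
illustrative, not Deep Blue's code, and probe() is the same hypothetical
single-move search interface as in the first sketch.

    typedef int (*probe_fn)(int move, int alpha, int beta, int depth);

    int is_beta_singular(int best_move, const int *moves, int n,
                         int beta, int S, int depth, int R,
                         probe_fn probe)
    {
        /* step 1: a fail high only says score >= beta, so pin the
           score down with the window [beta,beta+S] at full depth */
        int t = probe(best_move, beta, beta + S, depth);

        /* if t reached beta+S, then beta itself is a safe test value
           (score - S >= beta); otherwise test against the true score
           minus the margin */
        int X = (t >= beta + S) ? beta : t - S;

        /* step 2: probe every sibling with a zero window at X, at
           reduced depth; any sibling inside the margin kills it */
        for (int i = 0; i < n; i++) {
            if (moves[i] == best_move)
                continue;
            if (probe(moves[i], X, X + 1, depth - R) > X)
                return 0;      /* sibling within S: not singular */
        }
        return 1;
    }

This is where Vincent's cost estimate comes from: one extra full-depth
re-search of the fail-high move plus up to 39 reduced-depth sibling
probes, at a node that would otherwise have been a cheap cutoff.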
The version of Crafty with SE loses about a ply over the one without. I
believe the Deep Thought paper reported the same sort of loss. The reason
is that most moves are not singular, and you rarely search all 39 moves to
prove that a move is singular: on the offset re-search, as soon as one
move fails high, you test it on the original window, where it fails low,
and you stop right there, since there is obviously no singular move (a
sketch of this early exit follows below).

>>Thanks,
>>James
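Here is a sketch of that early exit, under the same assumptions as the
earlier fragments (hypothetical probe() interface, illustrative names).
The offset loop stops at the first sibling that climbs above X, after one
recheck on the original window.

    typedef int (*probe_fn)(int move, int alpha, int beta, int depth);

    /* moves[0] has the exact score best_score.  Returns 1 if it is
       singular by margin S.  The loop stops at the FIRST sibling that
       comes within the margin: that sibling is rechecked on the
       original window (it usually fails low there) and its score is
       handed back so the caller can deal with the rare "new best move"
       case; either way no move is singular, so the test ends at once. */
    int singular_test(const int *moves, int n, int best_score,
                      int alpha, int beta, int S, int depth, int R,
                      probe_fn probe, int *recheck_score)
    {
        int X = best_score - S;
        for (int i = 1; i < n; i++) {
            if (probe(moves[i], X, X + 1, depth - R) > X) {
                *recheck_score = probe(moves[i], alpha, beta, depth);
                return 0;      /* stop right there: no singular move */
            }
        }
        return 1;
    }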