Computer Chess Club Archives


Subject: Some crazy ideas

Author: Gareth McCaughan

Date: 08:11:08 03/29/99


Why are the following ideas crazy? (Perhaps they aren't; but, so far as
I know, no one does them, so there's probably a reason. Or maybe everyone's
doing them, and I'm just ignorant.)

1. Unifying q-search, singular extensions and selective search.

Suppose the recursive search routine takes a "depth" argument measured
in units of 1/16 ply or something of the kind. Singular extensions say
things like "Add 1/2 ply to the search depth if the move is of kind X",
so this is being done already. But: why not take it further? Add 1 ply
to the depth for every capture; that effectively does quiescence search.
(But maybe the cost of doing this is unreasonably high? I just tried
doing that to TSCP (because it's so easy to make changes to), and it
looks like maybe it is.)
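For concreteness, here is a minimal sketch of the fractional-ply idea, with depth measured in 1/16-ply "grains". None of it is real TSCP code: gen_moves(), makemove(), takeback(), eval() and is_capture() are stand-ins for whatever an engine actually provides, mate/stalemate detection is omitted, and a real quiescence search's stand-pat cutoff isn't modelled.

    #define GRAIN 16                    /* 16 grains = one ply */

    int search(int alpha, int beta, int depth)   /* depth in grains */
    {
        if (depth < GRAIN)              /* less than a full ply left */
            return eval();

        int n = gen_moves();
        for (int i = 0; i < n; ++i) {
            if (!makemove(i))           /* skip illegal moves */
                continue;

            /* A capture earns a full extra ply, so tactical sequences
               keep being followed -- this is what stands in for the
               usual quiescence search. */
            int ext = is_capture(i) ? GRAIN : 0;

            int score = -search(-beta, -alpha, depth - GRAIN + ext);
            takeback();

            if (score >= beta)
                return beta;
            if (score > alpha)
                alpha = score;
        }
        return alpha;
    }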

Possibly more to the point: if the amount we decrease the "depth" by when
searching each move is smaller for moves reckoned "better" by earlier
iterations, don't we effectively get a more graceful variety of forward
pruning, with more promising moves being investigated more deeply? And
this *without* the problem of apparently-weaker moves simply not being
looked at at all. This seems like a terribly, terribly obvious idea;
but does anyone actually do it?
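Here is a sketch of that variant, reusing the loop from the sketch above. Moves are assumed to be sorted best-first by the previous iteration's results, and the grain counts are invented purely for illustration.

    for (int i = 0; i < n; ++i) {
        if (!makemove(i))
            continue;

        int cost = GRAIN;                 /* nominal one ply = 16 grains */
        if (i < 2)        cost -= 4;      /* most promising: 12/16 ply  */
        else if (i >= 6)  cost += 8;      /* least promising: 24/16 ply */

        int score = -search(-beta, -alpha, depth - cost);
        takeback();

        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }

Apparently weaker moves thus run out of depth sooner, but every move still gets searched to some depth.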

2. A very extreme tradeoff between width and depth.

(I apologise for the excess of notation in what follows. I haven't been able
to find a way of expressing it that's clearer.)

Write E for your "flat" (leaf-node) evaluator, and E(d) for minimax
evaluation to depth d. So far, so familiar. Now, consider a new evaluator
E_m(d). It evaluates a position by playing m moves in it according to E(d)
and then evaluating the resulting position with E(d). (So, for instance,
E_1(d) is the same thing as E(d+1).) Finally, consider the evaluator
[E_m(d2)](d1). In other words, do minimax to depth d1, but evaluate the
leaves with E_m(d2). This is like doing ordinary minimax to depth d1+d2,
but prolonging the analysis by m moves "in the middle". When m=0, of course,
this is the same thing as E(d1+d2), the usual minimax evaluator.
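To make the definition concrete, here is a rough sketch of [E_m(d2)](d1). Everything in it is hypothetical: search() is an ordinary negamax alpha-beta returning the score for the side to move; play_best_move(d) is assumed to search to depth d, make the move it prefers, and return 0 if there is no legal move; gen_moves(), makemove() and takeback() stand in for an engine's own routines.

    #define INF 1000000                 /* wider than any real score */

    /* E_m(d2): play m moves chosen by depth-d2 searches, then return the
       depth-d2 value of the position reached, from the point of view of
       the side to move in the *original* position. */
    int eval_m(int m, int d2)
    {
        int played = 0;
        for (int i = 0; i < m; ++i) {
            if (!play_best_move(d2))    /* mate or stalemate: stop early */
                break;
            ++played;
        }
        int score = search(-INF, +INF, d2);
        if (played & 1)                 /* side to move has flipped */
            score = -score;
        while (played--)
            takeback();
        return score;
    }

    /* [E_m(d2)](d1): ordinary negamax to depth d1 (whole plies here),
       but with eval_m() in place of the flat evaluator at the leaves. */
    int outer_search(int alpha, int beta, int d1, int m, int d2)
    {
        if (d1 == 0)
            return eval_m(m, d2);

        int n = gen_moves();
        for (int i = 0; i < n; ++i) {
            if (!makemove(i))
                continue;
            int score = -outer_search(-beta, -alpha, d1 - 1, m, d2);
            takeback();
            if (score >= beta)
                return beta;
            if (score > alpha)
                alpha = score;
        }
        return alpha;
    }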

Increasing d1 or d2 by 1 costs, say, a factor of 3 in time. For that price,
we can increase m by a factor of 3.
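Rough arithmetic behind that claim (my numbers, nothing measured): with an effective branching factor of about b = 3 per ply, the outer search reaches about b^d1 leaves, and at each leaf E_m(d2) runs roughly (m+1) searches of depth d2, so

    total work  ~  b^d1 * (m+1) * b^d2.

Adding 1 to d1 or d2 multiplies that by about b = 3, while the cost is only linear in m, so tripling m also costs roughly a factor of 3.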

It seems to me likely that at some point increasing m becomes more valuable
than increasing d; and, furthermore, that for m of reasonable size, one can
get information from this sort of evaluation that is pretty inaccessible to
the usual plain-minimax evaluator.

For instance, decent king-safety evaluation. (There must be plenty of positions
in which existing programs are perfectly capable of playing a kingside attack
without needing too much time per move, but can't tell at the outset that it will
work.) Or weaknesses in the pawn structure. (There must be plenty of positions
in which existing programs are perfectly capable of piling up attackers on a
weakness, but can't tell at the outset how much damage this will do to the
opponent's position.)

This also seems nearer to the way in which humans think about positions
than what is done at present; but maybe that's a bad thing rather than a
good thing...


