Computer Chess Club Archives



Subject: Re: double hash tables

Author: Komputer Korner

Date: 18:05:39 10/07/97




On October 01, 1997 at 09:32:00, Chris Whittington wrote:

>
>On October 01, 1997 at 08:50:17, Robert Hyatt wrote:
>
>>On October 01, 1997 at 06:32:46, Chris Whittington wrote:
>>
>>>
>>>CSTal always used to have one depth-based hash table with five bucket
>>>attempts on clashes.  After reading the threads in rgcc, I tried it
>>>out with two hash tables: an always-replace table and a depth-priority
>>>table.
>>>
>>>Some test positions get faster, some get slower, but, on the whole,
>>>there's an improvement.
>>>
>>>Until, that is, the hash gets overloaded, when the old depth-only table
>>>starts to come out on top.
>>>
>>>Is this a generally observed phenomenon ?
>>>
>>>Chris
>>
>>yes.  I have two hash algorithms for Crafty.  I use Ken's two-tiered
>>approach normally, because it is very fast execution-wise, and simple
>>to understand.  I include this version in the "released" source code,
>>and use this version on ICC exclusively.
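
For anyone following along, here is a minimal C sketch of the two-table
scheme being discussed.  All the names and sizes are illustrative, not
CSTal's or Crafty's actual code: the store prefers the deeper search in
one table and falls back to an always-replace table, and the probe
checks both.

#include <stdint.h>

#define TABLE_SIZE (1 << 16)            /* entries per table; illustrative */

typedef struct {
    uint64_t key;                       /* full Zobrist key, for verification */
    int16_t  score;
    int8_t   depth;                     /* draft this entry was searched to */
    uint8_t  flags;                     /* EXACT / LOWER / UPPER bound type */
} HashEntry;

static HashEntry depth_table[TABLE_SIZE];   /* depth-preferred table */
static HashEntry always_table[TABLE_SIZE];  /* always-replace table  */

/* Store: the depth-preferred table keeps the deeper entry; anything
   that loses the depth test goes into the always-replace table. */
void hash_store(uint64_t key, int score, int depth, int flags)
{
    HashEntry *d = &depth_table[key % TABLE_SIZE];
    if (depth >= d->depth) {            /* deeper (or equal) search wins */
        d->key = key; d->score = score;
        d->depth = depth; d->flags = flags;
    } else {                            /* shallower: still keep it around */
        HashEntry *a = &always_table[key % TABLE_SIZE];
        a->key = key; a->score = score;
        a->depth = depth; a->flags = flags;
    }
}

/* Probe: try the depth-preferred table first, then always-replace. */
HashEntry *hash_probe(uint64_t key)
{
    HashEntry *d = &depth_table[key % TABLE_SIZE];
    if (d->key == key) return d;
    HashEntry *a = &always_table[key % TABLE_SIZE];
    if (a->key == key) return a;
    return 0;                           /* miss */
}

The depth-preferred table protects expensive deep entries from being
overwritten, while the always-replace table keeps recent entries that
would otherwise be lost once the table fills.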
>
>Can we assume from this that there are two versions of Crafty, one that
>gets released, and one that doesn't ?
>
>>
>>The other approach, which goes by multiple names depending on exactly
>>how it is implemented (rehashing as I used in Cray Blitz, which is
>>likely the best of the best approaches), buckets, and chains, all
>>offer advantages when the table becomes saturated.  Don Beal tested
>>this in an ICCA article and found exactly the same thing.
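
And a sketch of the bucket variant, with five slots per hash index as
CSTal used.  This reuses the HashEntry struct and TABLE_SIZE from the
sketch above and is again only illustrative: a store replaces the
shallowest of the candidate slots, and a probe scans them all.

#define BUCKET_SLOTS 5                  /* five attempts, as CSTal used */

static HashEntry bucket_table[TABLE_SIZE][BUCKET_SLOTS];

/* Store: overwrite an entry for the same position if one exists,
   otherwise evict the shallowest entry in the bucket. */
void bucket_store(uint64_t key, int score, int depth, int flags)
{
    HashEntry *slots = bucket_table[key % TABLE_SIZE];
    HashEntry *victim = &slots[0];
    int i;
    for (i = 0; i < BUCKET_SLOTS; i++) {
        if (slots[i].key == key) {      /* same position: update in place */
            victim = &slots[i];
            break;
        }
        if (slots[i].depth < victim->depth)
            victim = &slots[i];         /* remember the shallowest so far */
    }
    victim->key = key; victim->score = score;
    victim->depth = depth; victim->flags = flags;
}

/* Probe: scan every slot in the bucket for a key match. */
HashEntry *bucket_probe(uint64_t key)
{
    HashEntry *slots = bucket_table[key % TABLE_SIZE];
    int i;
    for (i = 0; i < BUCKET_SLOTS; i++)
        if (slots[i].key == key)
            return &slots[i];
    return 0;                           /* miss */
}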
>>
>>The only saving grace for me is that I have not hashed q-search nodes
>>for way over a year.  When I compared hashing in the q-search with
>>hashing the normal search only, I found the latter to be a few percent
>>faster.  The tree got a little bigger, but the search sped up enough
>>to offset this and even give me a couple of percent more in speed.  A
>>side-effect of this is that since I don't hash q-search nodes, my hash
>>table is much less stressed than in programs that do.  I'd guess that
>>for middlegame positions, at least 50% of the nodes would be
>>classified as "leaf" or "quiescence", and they never make it into the
>>table.
>>
>>So I get by with smaller tables.
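
Schematically, not hashing the q-search looks something like the
skeleton below.  This is my own simplification, not Crafty's actual
code: move generation and bound-type checks are elided, and it assumes
the hash_probe/hash_store routines from the first sketch.  The point is
only where the table is (and is not) touched.

typedef struct {
    uint64_t key;                       /* Zobrist key of the position */
    /* ... the rest of the board state, elided ... */
} Position;

int evaluate(const Position *pos);      /* static eval, defined elsewhere */
int quiesce(Position *pos, int alpha, int beta);

/* Full-width search: probes and stores the hash table. */
int search(Position *pos, int alpha, int beta, int depth)
{
    HashEntry *he = hash_probe(pos->key);
    if (he != 0 && he->depth >= depth)  /* deep enough to trust */
        return he->score;               /* (bound-type checks elided) */

    if (depth <= 0)
        return quiesce(pos, alpha, beta);   /* q-search takes over here */

    int best = -32000;
    /* ... loop over moves: make, best vs. -search(...), unmake ... */

    hash_store(pos->key, best, depth, 0);   /* full-width nodes get stored */
    return best;
}

/* Quiescence search: note that it neither probes nor stores, so the
   "leaf"/"quiescence" nodes never compete for hash table space. */
int quiesce(Position *pos, int alpha, int beta)
{
    int stand_pat = evaluate(pos);
    if (stand_pat >= beta)
        return beta;
    if (stand_pat > alpha)
        alpha = stand_pat;
    /* ... loop over captures only, recursing into quiesce() ... */
    return alpha;
}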

So does this mean that having a second hashing algorithm that kicks in
when the hash table gets overloaded is the optimum approach for now?
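
Purely as a thought experiment (hypothetical, not anyone's actual
code), such a switch might amount to counting stores and falling back
to the bucket scheme once the table is clearly saturated, using the
routines from the sketches above:

static unsigned long stores_this_search;
static int overloaded;                  /* 0 = two-table scheme, 1 = buckets */

void hash_store_adaptive(uint64_t key, int score, int depth, int flags)
{
    if (++stores_this_search > 2UL * TABLE_SIZE)
        overloaded = 1;                 /* far more stores than slots */

    if (overloaded)
        bucket_store(key, score, depth, flags);
    else
        hash_store(key, score, depth, flags);
}

(stores_this_search would be reset at the root of each new search.)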



