Author: Robert Hyatt
Date: 08:00:18 12/04/05
On December 04, 2005 at 03:30:25, J. Wesley Cleveland wrote:

>On December 03, 2005 at 23:35:56, Robert Hyatt wrote:
>
>>On December 03, 2005 at 17:11:32, chandler yergin wrote:
>>
>>>On December 03, 2005 at 12:47:07, Robert Hyatt wrote:
>>>
>>>>On December 03, 2005 at 09:48:12, Matthew Hull wrote:
>>>>
>>>>>On December 02, 2005 at 23:27:41, Robert Hyatt wrote:
>>>>>
>>>>>>On December 02, 2005 at 17:47:00, Tony Nichols wrote:
>>>>>>
>>>>>>>On December 02, 2005 at 17:21:59, Robert Hyatt wrote:
>>>>>>>
>>>>>>>>It is time to stop this now. The above is utter nonsense. We don't "search"
>>>>>>>>hash tables. Larger hash tables do not take longer to search, because we just
>>>>>>>>don't search them. We randomly probe into them and either hit or miss, so the
>>>>>>>>size has absolutely no effect other than larger sizes hold more information
>>>>>>>>without requiring that older data be overwritten sooner.
>>>>>>>>
>>>>>>>>You are quoting nonsense...
>>>>>>>
>>>>>>>
>>>>>>>Hello,
>>>>>>>
>>>>>>> Is it safe to assume that you can't have too much hash? I mean, as long as you
>>>>>>>have the ram.
>>>>>>>Regards
>>>>>>>Tony
>>>>>>
>>>>>>
>>>>>>pretty much. Beyond some point additional hash will not help. But to see how
>>>>>>it helps, set it to something like 384K (yes 384 k bytes) and run a position for
>>>>>>say 10 minutes. Record the highest depth reached and the time to reach that
>>>>>>depth. Double the hash and re-run. Keep doing this until it doesn't get any
>>>>>>faster. You just reached the max needed for the 10 minute search time (10
>>>>>>minutes was just a number, pick anything you want). You will see significant
>>>>>>speed improvements at first, but they begin to flatten out and eventually
>>>>>>doubling the hash doesn't change a thing any further.
>>>>>>
>>>>>>If a program clears hash between moves (most do not) then this can be a bigger
>>>>>>issue with large hashes since they do take time to clear should that be
>>>>>>needed...
>>>>>
>>>>>
>>>>>Also, a very slight slowdown with a huge hash table can take effect if the
>>>>>higher memory positions require addressing tricks to reach, which seems to be
>>>>>especially true on i686 systems. At that point, the diminishing return of a
>>>>>huge table is overtaken by the extra clock cycles needed for the high-memory
>>>>>probe, resulting in a slightly perceptible performance hit.
>>>>
>>>>Yes it is possible that when total memory size goes beyond some value that we
>>>>begin to see TLB thrashing, which adds extra memory accesses to each hash probe,
>>>>to translate the virtual addresses to real. However, in general, bigger hash
>>>>should always be better, up until you reach the point where there is very little
>>>>overwriting, going beyond that might do nothing more than aggravating the TLB
>>>>miss problem.
>>>>
>>>>I always run a series of positions with steadily increasing hash size to find
>>>>the "sweet spot" beyond which performance doesn't get better or begins to drop
>>>>off due to excessive TLB misses...
>>>
>>>I think that's what some of us meant when the thread started.
>>>That there is an Optimal amount of Hash based on available Ram of the Processor.
>>>I really don't understand all the confusion.
>>
>>
>>That isn't exactly what I said. The TLB problem is independent of total RAM on
>>the computer. It is an internal associative memory device used to map virtual
>>to real addresses, and has a fixed size for a given processor version. However,
>>we are talking about adding about 2 extra memory references to a hash probe,
>>which is not that significant. It won't cost 1% total time, for example...
>
>Actually, it can be >5%. I tried a experiment with crafty a while ago. I changed
>HashStore to be a no-op, so there are no hash hits to affect the search tree.
>Then I ran the same set of positions with different hash sizes and got more than
>5% different run times between the smallest and the largest hash sizes.

I suppose that is possible.
But hashing adds things to the search as well, and you apparently eliminated the plus side completely. For example, the hash move lets me do a search without generating any moves, which is a significant savings and works against that 5% figure you got. In fact, it will wipe that out with reasonable hash-hit rates...
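To make the "probe, don't search" point concrete, here is a minimal sketch of a direct-index transposition table in C. The names and layout are illustrative only (this is not Crafty's actual code, and a real entry packs the best move, bound type, and age as well): a probe is a single indexed load at `key & mask`, so lookup cost is constant no matter how large the table is; a bigger table only changes how soon old entries get overwritten.

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative transposition-table entry; real engines also store the
   best move, a bound flag, and an age field packed into the entry. */
typedef struct {
    uint64_t key;   /* full Zobrist key, kept to verify a hit */
    int      score; /* cached search score */
    int      depth; /* draft at which the score was obtained */
} TTEntry;

static TTEntry *table;
static uint64_t mask;    /* size - 1; size must be a power of two */

void tt_init(uint64_t size) {
    table = calloc(size, sizeof(TTEntry)); /* zeroed: all slots empty */
    mask = size - 1;
}

void tt_store(uint64_t key, int score, int depth) {
    /* One indexed write; cost does not depend on table size. */
    TTEntry *e = &table[key & mask];
    e->key = key;
    e->score = score;
    e->depth = depth;
}

/* Returns 1 on a hit (stored key matches), 0 on a miss.
   Either way it is a single probe, never a search. */
int tt_probe(uint64_t key, int *score) {
    TTEntry *e = &table[key & mask];
    if (e->key == key) {
        *score = e->score;
        return 1;
    }
    return 0;
}
```

The doubling experiment described above falls out of this structure: growing `size` (and hence `mask`) spreads positions over more slots, so fewer useful entries are overwritten between probes, until the table is large enough that overwriting is already rare and further doubling buys nothing.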
Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.