Computer Chess Club Archives


Subject: Re: Chess program improvement project (copy at Winboard::Programming)

Author: Stuart Cracraft

Date: 11:25:07 03/08/06



On March 08, 2006 at 10:39:14, Robert Hyatt wrote:

>On March 07, 2006 at 01:42:56, Stuart Cracraft wrote:
>
>>On March 07, 2006 at 00:40:11, Dann Corbit wrote:
>>
>>>On March 07, 2006 at 00:32:55, Stuart Cracraft wrote:
>>>
>>>>On March 07, 2006 at 00:22:32, Dann Corbit wrote:
>>>>
>>>>>On March 07, 2006 at 00:05:59, Stuart Cracraft wrote:
>>>>>
>>>>>>On March 06, 2006 at 23:59:59, Dann Corbit wrote:
>>>>>>
>>>>>>>On March 06, 2006 at 23:55:17, Dann Corbit wrote:
>>>>>>>
>>>>>>>>On March 06, 2006 at 23:39:36, Stuart Cracraft wrote:
>>>>>>>>
>>>>>>>>>On March 06, 2006 at 22:47:18, Dann Corbit wrote:
>>>>>>>>>
>>>>>>>>>>On March 06, 2006 at 22:13:56, Stuart Cracraft wrote:
>>>>>>>>>>[snip]
>>>>>>>>>>>Discouraging, Dann. Discouraging.
>>>>>>>>>>
>>>>>>>>>>Suppose that you are 3 lines away from approximately the same result.
>>>>>>>>>>
>>>>>>>>>>BTW, I have had a crafty version score 300/300 with the same time controls.
>>>>>>>>>>
>>>>>>>>>>The only difficult problem in this set is WAC.230.
>>>>>>>>>>
>>>>>>>>>>I think that WAC is a great set to start working with on a chess engine.
>>>>>>>>>>After a few months you are going to graduate to something tougher.
>>>>>>>>>>
>>>>>>>>>>I guess that there is some simple bug that is costing you 80% or more of the
>>>>>>>>>>misses.
>>>>>>>>>>
>>>>>>>>>>It sounds like an advanced engine from the things I have read so far.
>>>>>>>>>>
>>>>>>>>>>I think I saw a list of the missed problems by your program.  I guess that I may
>>>>>>>>>>see a theme problem when I go over them.
>>>>>>>>>
>>>>>>>>>Thanks - I look forward to those comments at your availability.
>>>>>>>>>
>>>>>>>>>I reran the whole 300 suite at 10 seconds each this evening,
>>>>>>>>>due to Bob's comment, and pulled the failed 29 (Bob failed 9
>>>>>>>>>I believe.)
>>>>>>>>>
>>>>>>>>>Then I reran against just my 29 and found that only 24 failed the
>>>>>>>>>second time, at the same time control.
>>>>>>>>>
>>>>>>>>>This tells me that there is something about the test that is not
>>>>>>>>>reproducible, based on either the ordering of the tests in the suite
>>>>>>>>>or aspects being carried from test position to test position (hash
>>>>>>>>>tables, history heuristic, etc.) I am not sure what it is.
>>>>>>>>>
>>>>>>>>>To test this theory, I took the 29 that failed of which only 24 failed
>>>>>>>>>the second time and reversed them so that the last came first and retested.
>>>>>>>>>This time 4 of the 29 were solved instead of 5. This difference of one is
>>>>>>>>>too small to claim an ordering effect from a sample of just 29 positions.
>>>>>>>>>
>>>>>>>>>Still this indicates that instead of failing 29 I am failing 24-25
>>>>>>>>>and I am not sure what would cause it. Before every iterative deepening
>>>>>>>>>ply 1 search, I clear out the history heuristic table, the hash tables,
>>>>>>>>>and the principal variation arrays.
>>>>>>>>>
>>>>>>>>>Still that is only 4-5 more leaving 24-25 left, arguably 15-16 if one
>>>>>>>>>wants to aspire as high as Crafty.
>>>>>>>>>
>>>>>>>>>I am at a total roadblock on the subject. As I mentioned, I will be
>>>>>>>>>putting money where my mouth is and making a significant donation to
>>>>>>>>>the board sponsors for guidance to a solution of gaining say another
>>>>>>>>>10 right above my current 271 at 10 seconds. (Hopefully that's legal
>>>>>>>>>here.)
>>>>>>>>
>>>>>>>>This sounds like a bug.
>>>>>>>>If you analyze a new position, you should definitely clear the hash table
>>>>>>>>between analysis runs.
>>>>>>>
>>>>>>>Note:
>>>>>>>A good way to know you should clear the hash is if the node you are given to
>>>>>>>analyze is not a pv node in the hash table.
>>>>>>>
>>>>>>>If the node is a PV node in the hash table, then don't bother to clear it.
>>>>>>>
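[The rule Dann describes can be sketched as follows -- a minimal Python sketch, where the table layout, bound flags, and function name are hypothetical, not from any engine in this thread:]

```python
# Bound types for transposition-table entries; EXACT entries are PV nodes.
EXACT, LOWER, UPPER = 0, 1, 2

def maybe_clear_hash(tt, root_key):
    """Clear the hash table only when the new root position is not
    already stored as a PV (EXACT-bound) node."""
    entry = tt.get(root_key)
    if entry is not None and entry["bound"] == EXACT:
        return False   # PV node found: keep the accumulated entries
    tt.clear()         # unfamiliar position: start fresh
    return True
```

[Fed consecutive positions from one game, the new root is usually already a PV node from the previous search, so the table survives; an unrelated test position triggers a clear.]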
>>>>>>
>>>>>>This last note went right over my head.
>>>>>>
>>>>>>I have a routine, called iterate(), which does an iterative deepening
>>>>>>PVS search using a for loop. Prior to the for loop it clears the hash
>>>>>>table, the history heuristic table and the pv tables to prevent
>>>>>>interference between moves on test suites. I should probably make it
>>>>>>not clear the hash table, or maybe even the history heuristic table
>>>>>>or pv if a regular game is being played, but no testing has been done
>>>>>>for that comparison so I just clear everything.
>>>>>>
>>>>>>Maybe you are talking about something else than the above.
>>>>>>
>>>>>>When/where would I clear the hash?
>>>>>>
>>>>>>I take the pv of the last iteration and stuff it into the hash table to
>>>>>>ensure it is there so that the pv can make the PVS search function properly.
>>>>>>When entering a search, I am currently generating all moves, putting the
>>>>>>hash and any pv move at the top, sorting based on see, mvv/lva, and basic
>>>>>>pc/sq, and then searching the top in the list (all attempts to avoid
>>>>>>the genmoves and search the hash move only have not been successful and
>>>>>>all results have always been better for me doing a full move gen.)
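[The iterate() routine described above might look roughly like this -- a hedged Python sketch, where search() and the table shapes are placeholders, not the poster's actual code:]

```python
def iterate(position, max_depth, tt, history, search):
    """Iterative-deepening driver: clear the shared tables once up front,
    then seed each iteration's PV back into the hash table so the PVS
    search tries the expected best line first."""
    tt.clear()         # transposition table
    history.clear()    # history-heuristic counters
    score, pv = 0, []  # principal variation from the previous iteration
    for depth in range(1, max_depth + 1):
        key = position
        for move in pv:  # stuff last iteration's PV into the hash
            tt[key] = {"move": move, "depth": depth}
            key = hash((key, move))  # stand-in for making the move
        score, pv = search(position, depth, tt, history)
    return score, pv
```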
>>>>>
>>>>>When you are analyzing a list of positions, clear the hash if someone shoves a
>>>>>position in your face and you don't see it in your hash table as a PV node.
>>>>>
>>>>>This matters if your engine is being run as a UCI engine (e.g. through the
>>>>>WB2UCI adapter for Winboard engines).  You don't just get a sequence of moves
>>>>>as with Winboard; you get the whole game pushed at you one move at a time.
>>>>>
>>>>>One adapter, which shall remain nameless even sent newgame every time.
>>>>>
>>>>>So if you have a nice hash table full of goodies and someone tells you to clear
>>>>>it, tell them to go and jump in the lake if the position they send to you is a
>>>>>pv node.
>>>>
>>>>This is still above my head.
>>>>
>>>>My hash clearing / pv clearing /heuristic clearing happens at the
>>>>beginning of every iterative deepening cycle before depth=1 is searched.
>>>>
>>>>Next time a cycle begins, it is cleared again.
>>>>
>>>>Are you saying that you wouldn't clear it?
>>>>
>>>>That would make searches from position to position interdependent
>>>>rather than independent.
>>>>
>>>>Why would I want to do that?
>>>>
>>>>It is likely I still fully misunderstand you on this point.
>>>>
>>>>If you push a whole series of moves at my program either from a FEN position
>>>>and then a set of moves after that, or from the start of the game, or
>>>>wherever, my program will reinitialize the hashkey and pawnhashkey to
>>>>match exactly what you would find as if you had played out to the final
>>>>position in the entire sequence from the start of the game (I think.)
>>>>
>>>>That seemed wise for me to do but now I am confused.
>>>>
>>>>Perhaps I should not clear my hash between individual test positions
>>>>or main board positions in a game and see what that does. Let me
>>>>do that next.
>>>
>>>ChessMaster clears the hash after every move.  Most programs do not do that.
>>>Normally, you only clear the hash table when you start a new game or when you
>>>get a sudden change in what the position looks like.
>>>
>>>If you are analyzing a list of positions, they might be consecutive moves from a
>>>chess game (indeed, many people use chess engines to analyze their own games one
>>>move at a time).
>>>
>>>So if your hash table has the pv node from your previous position, why clear it?
>>> You will lose a lot of valuable data.  This (of course) assumes that you have
>>>some way to age hash table entries or some kind of replacement scheme.
>>>
>>>Imagine, for instance, this sequence of EPD records:
>>>
>>>[D]rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - bm e4;
>>>[D]rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq - bm d5;
>>>[D]rnbqkbnr/ppp1pppp/8/3p4/4P3/8/PPPP1PPP/RNBQKBNR w KQkq - bm exd5;
>>>[D]rnbqkbnr/ppp1pppp/8/3P4/8/8/PPPP1PPP/RNBQKBNR b KQkq - bm Qxd5;
>>>[D]rnb1kbnr/ppp1pppp/8/3q4/8/8/PPPP1PPP/RNBQKBNR w KQkq - bm Nc3;
>>>[D]rnb1kbnr/ppp1pppp/8/3q4/8/2N5/PPPP1PPP/R1BQKBNR b KQkq - bm Qa5;
>>>[D]rnb1kbnr/ppp1pppp/8/q7/8/2N5/PPPP1PPP/R1BQKBNR w KQkq - bm d4;
>>>[D]rnb1kbnr/ppp1pppp/8/q7/3P4/2N5/PPP2PPP/R1BQKBNR b KQkq - bm Nf6;
>>>[D]rnb1kb1r/ppp1pppp/5n2/q7/3P4/2N5/PPP2PPP/R1BQKBNR w KQkq - bm Nf3;
>>>[D]rnb1kb1r/ppp1pppp/5n2/q7/3P4/2N2N2/PPP2PPP/R1BQKB1R b KQkq - bm c6;
>>>[D]rnb1kb1r/pp2pppp/2p2n2/q7/3P4/2N2N2/PPP2PPP/R1BQKB1R w KQkq - bm Bd2;
>>>[D]rnb1kb1r/pp2pppp/2p2n2/q7/3P4/2N2N2/PPPB1PPP/R2QKB1R b KQkq - bm Qb6;
>>>
>>>If you clear the hash table between each record, you are throwing away valuable
>>>computations.
>>>
>>>Now, I don't have a problem with clearing the hash table between EPD analysis
>>>records.  But it is not improbable that EPD records are being fed to you in a
>>>sequence from actual game play.
>>>
>>>I hope that it has become more clear now.
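[Keeping the table across positions only works with the aging or replacement scheme Dann mentions above. One common approach -- a sketch, not any particular engine's code -- bumps an age counter once per root search and prefers deep, current entries when a slot collides:]

```python
class TranspositionTable:
    """Fixed-size table with depth-preferred replacement and aging."""
    def __init__(self, size):
        self.size = size
        self.slots = [None] * size
        self.age = 0

    def new_search(self):
        self.age += 1  # called once per root search, instead of clearing

    def store(self, key, depth, entry):
        slot = key % self.size
        old = self.slots[slot]
        # Replace if the slot is empty, from an older search, or shallower.
        if old is None or old["age"] != self.age or old["depth"] <= depth:
            self.slots[slot] = dict(entry, key=key, depth=depth, age=self.age)

    def probe(self, key):
        old = self.slots[key % self.size]
        return old if old is not None and old["key"] == key else None
```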
>>
>>Yes - thanks - very much so.
>>
>>I turned off the hash clearing between successive main game board
>>positions, whether between regular moves in a regular game, a test suite,
>>or otherwise, and reran against WAC.
>>
>>However, the result turned out the same between the before/after:
>>
>>+ 6.93/17.12 80% 241/300 bf=2.18 h/p/q=36.91%/96.16%/0.00% 247.42 26904576
>>89682/1/108742 155075/85257/219021/911366/4054309/102093/0/5831
>>
>>+ 6.90/17.00 80% 241/300 bf=2.25 h/p/q=36.80%/96.15%/0.00% 247.65 26167088
>>87224/1/105662 153051/83007/224589/880048/3944718/99589/0/5788
>>
>>So 241 out of 300 solved at 1 second per position. Total of 247 seconds
>>taken for the suite.
>>
>>It was about 3000 nodes per second faster with hash clearing between
>>positions though.
>
>Better move ordering slows the program down a bit in terms of NPS.  You more
>frequently search just one move and fail high, as you should, rather than
>searching several moves before failing high at some node.  Searching more moves
>at a node is better for NPS, since the time spent generating them is not wasted
>on moves you never search.
>
>My NPS goes up here, however, since I try the hash move before I generate
>anything, and if it causes a cutoff I get away with no move generation
>whatsoever, making things go faster rather than slower, so my NPS would go down
>some when clearing hash, not go up...
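[Bob's trick of trying the hash move before generating anything can be sketched as a generator -- a hypothetical Python illustration, where generate_moves and the entry layout are assumptions:]

```python
def first_moves(position, tt, generate_moves):
    """Yield the hash move first; generate the full move list only if
    the caller keeps iterating (i.e. the hash move did not cut off)."""
    entry = tt.get(position)
    hash_move = entry["move"] if entry else None
    if hash_move is not None:
        yield hash_move            # no move generation needed yet
    for move in generate_moves(position):
        if move != hash_move:      # don't search the hash move twice
            yield move
```

[If the hash move fails high, the generator is abandoned before generate_moves ever runs, which is why a well-stocked hash table can raise NPS in this scheme.]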
>
>I'd suggest, however, that you ignore NPS when testing/tuning for tactics.  NPS
>is important when you leave the algorithm alone and just try to optimize for
>speed.  But while optimizing for tactics, the time-to-solution is the only
>useful metric, not NPS.  If time to solution goes down, while NPS goes down as
>well, who cares, since in a real game you are given N seconds to make a move,
>not given a max of M nodes you can search before making a move...
>
>
>
>>
>>Your point is valuable, however, and I think clearing could be activated for
>>testing and left inactivated (persistent) for non-testing.
>>
>>Stuart
>>
>>

Yes -- I hardly ever refer to NPS.

I know that NPS and test-suite results are both pooh-poohed here, and
advocates recommend lengthy tournament matches in Arena against other
programs and against opponents on ICS/FICS. In due time...

Stuart




Last modified: Thu, 15 Apr 21 08:11:13 -0700

Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.