Computer Chess Club Archives



Subject: Non-deterministic behaviour - The Road to a Heaven of Genius?

Author: Rolf Tueschen

Date: 04:29:01 01/02/06



On January 01, 2006 at 14:36:08, George Sobala wrote:

>On January 01, 2006 at 11:17:26, Robert Hyatt wrote:
>
>>On December 31, 2005 at 17:41:27, George Sobala wrote:
>>
>>>I was playing around with seeing how Deep Shredder performs with different
>>>numbers of threads, and was fascinated to discover that the program behaviour
>>>becomes completely non-repeatable / non-deterministic once more than one thread
>>>is running.
>>
>>This is perfectly normal, and every parallel search program (that is any good)
>>exhibits this same behavior.  It's just something one lives with to obtain the
>>improved performance.
>>
>>
>>>
>>>With multiple threads, no analysis is the same two times running. The time to a
>>>solution of a problem varies wildly from run to run. This may come as no
>>>surprise to multi-processing experts amongst you but I was certainly surprised
>>>by the magnitude of the differences in time-to-solve between different runs.
>>
>>We've had threads here in the past about this.  Some positions produce wildly
>>varying speedups.  Some produce very consistent speedups.  The only thing
>>consistent is that things are consistently inconsistent for the most part. :)
>>
>
>Thanks, Robert. I thought I remembered something you had posted before about
>this, but had mentally pegged the phenomenon as perhaps creating a 10-15%
>variation. The unpredictable 20x speedup I have seen astonished me: it is almost
>like getting something for nothing - throw 4 threads at the task and there is a
>chance you can get 20x performance. Or not. Depends on how the computer is
>feeling today.
>
>These multithreaded engines would seem to display interesting characteristics:
>
>They won't play the same game twice, they can have flashes of genius ... or be
>inexplicably thick and slow on a certain day.
>
>There is no point trying to benchmark them by running them a single time through
>a test suite. The results of a second run could be very different.
>
>Am I correct in assuming that the behaviour of parallel search engines cannot be
>emulated? With a single threaded engine, you can emulate its behaviour e.g. by
>running an emulator program on a mainframe, a programmable calculator or with a
>stadium full of abacus users, or any other Turing machine. Your emulation may be
>very slow or very fast, but you will get the same answer in the end. It seems
>that this is not true of a parallel search engine. It is not a single Turing
>machine but a group of Turing machines whose interactions are governed by
>"random" external influences that can greatly affect the outcome of the task.
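
The non-repeatability George describes is easy to reproduce outside a chess engine. Below is a minimal sketch (nothing to do with Shredder's actual search; the helper `parallel_first_hit` and its parameters are invented for illustration): several worker threads scan disjoint chunks of a search space, and whichever thread the OS happens to schedule first gets to report its "solution" first, so repeated runs need not agree on which answer wins.

```python
import threading
import queue

def parallel_first_hit(items, predicate, n_threads=4):
    """Split `items` across n_threads workers; return the first item any
    worker reports that satisfies `predicate`.  Which worker reports first
    depends on OS thread scheduling, so repeated calls may return
    different (but all valid) answers -- the non-determinism at issue."""
    q = queue.Queue()
    chunk = (len(items) + n_threads - 1) // n_threads

    def worker(sub):
        for x in sub:
            if predicate(x):
                q.put(x)      # first hit in this worker's chunk
                return
        q.put(None)           # this chunk had no hit

    threads = [threading.Thread(target=worker,
                                args=(items[i * chunk:(i + 1) * chunk],))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    # Accept whichever non-None answer arrives first on the queue.
    result = None
    for _ in threads:
        hit = q.get()
        if hit is not None and result is None:
            result = hit
    for t in threads:
        t.join()
    return result

# Several items satisfy the predicate, one per chunk; which of them is
# returned can vary from run to run once more than one thread is involved.
answer = parallel_first_hit(list(range(1000)), lambda x: x % 97 == 0)
print(answer)
```

Every answer it returns is correct (a multiple of 97), but the particular one is scheduler-dependent; a single-threaded version of the same loop would return the same item on every run, which is exactly the single-Turing-machine determinism contrasted above.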


I'm unhappy with your choice of words. "How the computer is feeling", "genius",
"a group of Turing machines", "greatly influence the outcome" - do you really
expect a computer scientist to take such wording seriously? You know the field
well enough to see that your speculation - that something like genius could
arise merely from the non-deterministic aspects of parallel processing - is
wrong. Or you overlook the ever-present possibility of playing good moves by
"chance", without any awareness, or even for the wrong reasons. And if you mean
that parallel processing brings more randomness into computer chess, even that
is an insinuation the computer-chess programmers could not be happy with,
because it would imply they have no regular method of finding good moves. In
that case you cannot expect a decent answer from the experts here either. The
word "troll" comes to mind; I never normally use it, but here I think I have
given a couple of reasons for that verdict. Excuse me for pointing out the
obvious.





Current Computer Chess Club Forums at Talkchess. This site by Sean Mintz.