Computer Chess Club Archives


Subject: Re: Non-deterministic behaviour of Deep Shredder - REALLY interesting

Author: George Sobala

Date: 11:36:08 01/01/06



On January 01, 2006 at 11:17:26, Robert Hyatt wrote:

>On December 31, 2005 at 17:41:27, George Sobala wrote:
>
>>I was playing around with seeing how Deep Shredder performs with different
>>numbers of threads, and was fascinated to discover that the program behaviour
>>becomes completely non-repeatable / non-deterministic once more than one thread
>>is running.
>
>This is perfectly normal, and every parallel search program (that is any good)
>exhibits this same behavior.  It's just something one lives with to obtain the
>improved performance.
>
>
>>
>>With multiple threads, no analysis is the same two times running. The time to a
>>solution of a problem varies wildly from run to run. This may come as no
>>surprise to multi-processing experts amongst you but I was certainly surprised
>>by the magnitude of the differences in time-to-solve between different runs.
>
>We've had threads here in the past about this.  Some positions produce wildly
>varying speedups.  Some produce very consistent speedups.  The only thing
>consistent is that things are consistently inconsistent for the most part. :)
>

Thanks, Robert. I thought I remembered something you had posted before about
this, but had mentally pegged the phenomenon as perhaps creating a 10-15%
variation. The unpredictable 20x variation in speed I have seen astonished me:
it is almost like getting something for nothing - throw 4 threads at the task
and there is a chance you can get 20x performance. Or not. Depends on how the
computer is feeling today.

These multithreaded engines would seem to display interesting characteristics:

They won't play the same game twice; they can have flashes of genius ... or be
inexplicably thick and slow on a certain day.

There is no point trying to benchmark them by running them a single time through
a test suite. The results of a second run could be very different.
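To make the point concrete, here is a minimal sketch of how one might report
such a benchmark instead of a single run (the timings below are made-up
illustrative numbers, not real Shredder results):

```python
import statistics

# Summarize repeated timings of the same position with the same
# settings.  With a non-deterministic parallel engine, the median
# and the min-max spread are what matter; any single run is luck.
def summarize(times):
    return {
        "median": statistics.median(times),
        "min": min(times),
        "max": max(times),
    }

# e.g. five hypothetical times-to-solve (seconds) for one position:
runs = [12.4, 3.1, 61.0, 9.8, 14.2]
stats = summarize(runs)
# median 12.4 s, but a 3.1 to 61.0 s range - a ~20x swing run to run
```

The same idea applies across a whole test suite: score each position by its
median over several runs, and quote the spread alongside it.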

Am I correct in assuming that the behaviour of parallel search engines cannot be
emulated? With a single-threaded engine, you can emulate its behaviour, e.g. by
running an emulator program on a mainframe, a programmable calculator or a
stadium full of abacus users, or any other Turing machine. Your emulation may be
very slow or very fast, but you will get the same answer in the end. It seems
that this is not true of a parallel search engine. It is not a single Turing
machine but a group of Turing machines whose interactions are governed by
"random" external influences - timing effects which can greatly affect the
outcome of the task.
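The timing effect can be shown in a toy model (a sketch of the general
shared-bound idea, not Shredder's actual algorithm): workers share a
best-score-so-far bound, and a root move is skipped when its optimistic
ceiling cannot beat that bound. Which worker raises the bound first depends
on the operating system's scheduling, so the amount of work done varies from
run to run even though the final answer does not.

```python
import threading

def search_once(moves, n_threads=2):
    """moves: list of (exact_score, optimistic_ceiling) pairs."""
    best = {"score": float("-inf")}
    counters = {"next": 0, "nodes": 0}
    lock = threading.Lock()

    def worker():
        while True:
            with lock:
                i = counters["next"]
                if i >= len(moves):
                    return
                counters["next"] = i + 1
                exact, ceiling = moves[i]
                if ceiling <= best["score"]:
                    continue  # pruned for free, thanks to another thread
            for _ in range(50_000):  # simulate searching the subtree
                pass
            with lock:
                counters["nodes"] += 1000  # toy cost of one subtree
                if exact > best["score"]:
                    best["score"] = exact

    workers = [threading.Thread(target=worker) for _ in range(n_threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return best["score"], counters["nodes"]

# (exact score, optimistic ceiling) for five root moves
moves = [(10, 50), (40, 45), (20, 30), (5, 25), (15, 35)]
score, nodes = search_once(moves)
# score is always 40, but `nodes` can differ between runs
```

The best move's subtree always gets searched, so the answer is stable - but
how many of the other subtrees are pruned depends on whether the bound of 40
was published before or after each one was claimed, which is pure scheduling
luck. That is the "something for nothing" effect in miniature.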


