Author: Robert Hyatt
Date: 09:44:33 08/30/01
On August 30, 2001 at 11:51:40, Mark Rawlings wrote:

>On August 30, 2001 at 10:23:28, Robert Hyatt wrote:
>
>>On August 30, 2001 at 02:04:03, Derrick Daniels wrote:
>>
>>>On August 29, 2001 at 22:07:47, Robert Hyatt wrote:
>>>
>>>>On August 29, 2001 at 19:07:33, Derek Mauro wrote:
>>>>
>>>>>On August 29, 2001 at 15:43:32, Robert Hyatt wrote:
>>>>>
>>>>>>On August 29, 2001 at 15:36:54, Uri Blass wrote:
>>>>>>
>>>>>>>On August 29, 2001 at 15:21:09, Robert Hyatt wrote:
>>>>>>>
>>>>>>>>On August 29, 2001 at 14:41:48, Mark Young wrote:
>>>>>>>>
>>>>>>>>>On August 29, 2001 at 14:03:49, Robert Hyatt wrote:
>>>>>>>>>
>>>>>>>>>>On August 29, 2001 at 13:52:33, Uri Blass wrote:
>>>>>>>>>>
>>>>>>>>>>>On August 29, 2001 at 12:52:15, Roy Eassa wrote:
>>>>>>>>>>>
>>>>>>>>>>>>This sentence DOES say a lot, doesn't it:
>>>>>>>>>>>>
>>>>>>>>>>>>"By the summer of 1990--by which time three of the original Deep Thought team
>>>>>>>>>>>>had joined IBM--Deep Thought had achieved a 50 percent score in 10 games played
>>>>>>>>>>>>under tournament conditions against grandmasters and an 86 percent score in 14
>>>>>>>>>>>>games against international masters."
>>>>>>>>>>>>
>>>>>>>>>>>>That was 7 years earlier, on many-fold slower hardware (and with much weaker
>>>>>>>>>>>>software, no doubt), than what played Kasparov in 1997.
>>>>>>>>>>>
>>>>>>>>>>>No.
>>>>>>>>>>>This sentence tells me nothing new.
>>>>>>>>>>>
>>>>>>>>>>>I know that humans at that time did not know how to play against computers the
>>>>>>>>>>>way they do today.
>>>>>>>>>>>
>>>>>>>>>>>Today's programs get clearly better results than Deep Thought, and there is
>>>>>>>>>>>more than one case where they achieved a >2700 performance, in spite of the
>>>>>>>>>>>fact that the opponents could buy the program they played against, something
>>>>>>>>>>>that Deep Thought's opponents could not do.
>>>>>>>>>>
>>>>>>>>>>Deep Thought produced a rating of 2655 over 25 consecutive games against a
>>>>>>>>>>variety of opponents. None of them were "inexperienced" in playing against
>>>>>>>>>>computers. Byrne. Larsen. Browne. You-name-it. That argument doesn't hold
>>>>>>>>>>up under close scrutiny.
>>>>>>>>>
>>>>>>>>>>In some ways, it appears that the GMs of today are
>>>>>>>>>>prepared far worse to play computers than the GMs of 1992 were.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>I don't see how the GMs of today are less prepared to play computers. Any one of
>>>>>>>>>them can play, and has played, computer programs at home stronger than the
>>>>>>>>>programs of the early 1990s.
>>>>>>>>
>>>>>>>>I am basing that on the games I have seen, plus the important detail that in
>>>>>>>>1992, strong GM players at the US Open, the World Open, and other events
>>>>>>>>(particularly those in the northeast US) knew they would be facing Hitech,
>>>>>>>>Deep Thought, and at times, Belle and others. Since 1995 this has not been
>>>>>>>>the case, as it is nearly impossible to find a tournament in the US that will
>>>>>>>>allow a computer to compete. If they aren't going to face the machines, they
>>>>>>>>aren't going to study them.
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>>I don't think preparation is the problem. It is the strength of the programs of
>>>>>>>>>today. It seems if you are not in the top 100 of the FIDE list, your chances of
>>>>>>>>>besting the better programs are not very good.
>>>>>>>>>
>>>>>>>>>It seems clear that the programs of today are stronger than the Deep Thought of
>>>>>>>>>1992 that produced a rating of 2655 playing against "Byrne. Larsen. Browne.
>>>>>>>>>You-name-it". Do you agree with this?
>>>>>>>>
>>>>>>>>
>>>>>>>>No I don't. I would agree that the programs of today are probably in the
>>>>>>>>same league with the Deep Thought of 1992, maybe. At least on the 8-way boxes.
>>>>>>>>Their NPS speed would be similar. Deep Thought wasn't known to be an incredibly
>>>>>>>>"smart" program, and neither are today's programs.
>>>>>>>
>>>>>>>
>>>>>>>I consider the top programs of today clearly smarter than Deep Thought.
>>>>>>
>>>>>>Based on what? Top programs of today _still_ seem to be unable to understand
>>>>>>simple chess concepts like the pawn majority we have been discussing in another
>>>>>>thread. I discovered, by bits and pieces, some of the knowledge in Deep
>>>>>>Thought, and it was not "small" at all. Everyone assumes that the micros are
>>>>>>much smarter... and that us old supercomputer guys simply depended on raw speed
>>>>>>to win games. If you look at the game Cray Blitz vs Joe Sentef, from 1981,
>>>>>>you will find a position that many programs today will blow, and that programs
>>>>>>of 5 years ago would totally blow (bishop + wrong rook pawn ending knowledge).
>>>>>>We weren't "fast and dumb" at all. Neither were DT, DB, or DB2. Fast, yes. But
>>>>>>definitely not "dumb". The "intelligence" of today's programs is mostly myth,
>>>>>>brought on by fast hardware that searches deep enough to cover for some of the
>>>>>>positional weaknesses the programs have.
>>>>>
>>>>>If DB was "smarter" than today's programs (and I believe you that it was), and
>>>>>you consider today's programs not to be super-intelligent, why is it that we
>>>>>haven't been able to make smarter programs? It makes perfect sense that in 4
>>>>>years we should have made more progress. Did the DB guys just know a hell of a
>>>>>lot more than we have figured out, or is it that because of some hardware issue
>>>>>we just can't implement everything, or something else?
>>>>>
>>>>
>>>>
>>>>Building a chess program is very much like balancing a high-performance boat
>>>>on the pad at 80 MPH. It takes a very good sense of balance, touch, and skill.
>>>>In a chess engine, you have to balance speed vs smarts. Sometimes you have to
>>>>sacrifice one for the other to fix a specific problem. Too often, the smarts
>>>>have to take a back seat to speed, or the smart program is too slow and gets
>>>>ripped apart tactically. DB didn't have to make such compromises. In hardware,
>>>
>>>Hi Bob
>>>
>>>
>>>Just an uninformed thought... What if the Deep Blue team had implemented some of
>>>the compromises the micro programmers make, or adopted pruning and null-move
>>>techniques? Wouldn't Deep Blue have been even stronger, with a greater depth
>>>of search? I don't have enough computer chess understanding to know if this
>>>question makes sense, but it was just a thought.
>>
>>
>>Let's stick to my boat analogy for the moment. I'm currently running a 28"
>>pitch prop, to reach a top speed of around 85 miles per hour. I want to be
>>able to outrun my friends on top-end, and I _also_ want to be able to beat them
>>in a zero-to-sixty miles per hour race. To do that I would probably run a
>>24" pitch prop for better acceleration. But I have to compromise. Best top
>>speed might be at a 30" pitch; best acceleration might be at a 24" pitch. I pick
>>something in the middle to give me the best of both worlds.
>>
>>Now for Deep Blue. They had more money to spend than I do. So they go off and
>>build a variable-pitch prop that starts off at 22" pitch, and progresses to 30"
>>at high rpms.
>>Their special hardware solution blows me away in the drag
>>race, it blows me away in the top-end race, and it blows me away at anything
>>in between, because they didn't have to make a compromise: they were
>>designing hardware to do _exactly_ whatever the task at hand was.
>>
>>In DB, they don't _need_ to make compromises as we do in software programs.
>>Doing so would make no sense at all... They simply do whatever they want,
>>and they make it fast due to the hardware...
>>
>>
>
>I think the point was: wouldn't they be even _better_ with, for example,
>null-move? I would think the extra 2 or 3(?) ply would have been very helpful,
>just as it is with today's micros (even though it is a compromise...)
>
>Mark
>

I don't know. My feeling is that it would be better. But null-move is just a
shortcut way to reach deeper depths, with an associated penalty that we put up
with. But if you look at DB2's depths, they were reaching beyond 15 plies
without null-move. Is it really necessary to go 2-3 plies deeper by using
null-move and take all the risks that null-move introduces? That question is
not one I can answer without having the hardware to test it. IE the last time
I did a null-move vs non-null-move search comparison by playing a bunch of
games, null-move won the match, but not by a huge margin. Maybe a 50 Elo point
advantage or so. This was played at a time when I was searching about 70K
nodes per second on a Pentium Pro 200. Today I am over 10X faster. Would the
null-move program win by a bigger or smaller margin? I don't know. And if I
could go 16 plies without null-move, and 19 with, would the 19 win? Since I
can't run that experiment I have no idea. Hsu could run it. Whether he did or
not I don't know. They definitely had a null-move version of Deep Thought, as
they wrote a paper about the algorithm. But they didn't (to the best of my
knowledge) use it in any actual games they played.

As I said before, they work in a different world than we do. Vincent
criticizes their lack of hashing in the hardware. But he doesn't think about
the size of the hash table, and the bandwidth, you need if you can search up
to 1 billion nodes per second, peak. At 16 bytes per entry, one probe per node
is 16 gigabytes per second of memory traffic, which is far beyond any memory I
am aware of. So their lack of hashing in hardware actually might be a good
thing and not a weakness, at the speed/depth they are searching to. It is (to
me) very hard to take what we do on a PC and try to extrapolate how it would
behave at 1000X faster.
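
To make the null-move tradeoff concrete, here is a minimal sketch of the usual
textbook formulation, in C. This is illustrative only, not Crafty's search and
certainly not anything from DT/DB; the type names, the helper routines, and the
R=2 reduction are all assumptions:

typedef struct Position Position;
typedef int Move;

/* hypothetical helpers assumed to exist elsewhere in the engine */
extern int  evaluate(const Position *pos);
extern int  in_check(const Position *pos);
extern int  generate_moves(Position *pos, Move *moves);
extern void make_move(Position *pos, Move m);
extern void unmake_move(Position *pos, Move m);
extern void make_null(Position *pos);   /* pass; just flip side to move */
extern void unmake_null(Position *pos);

#define R 2   /* null-move depth reduction (assumed value) */

int search(Position *pos, int alpha, int beta, int depth) {
    if (depth <= 0)
        return evaluate(pos);           /* stand-in for a quiescence search */

    /* Null move: give the opponent a free move and search shallower.  If
     * even that fails high, assume the full-depth search would too, and
     * cut off.  The "risk" discussed above is zugzwang, where passing is
     * best, so at minimum skip the trick when the side to move is in
     * check (real engines also skip it in pawn-only endings). */
    if (depth > R && !in_check(pos)) {
        make_null(pos);
        int score = -search(pos, -beta, -beta + 1, depth - 1 - R);
        unmake_null(pos);
        if (score >= beta)
            return beta;                /* null-move cutoff */
    }

    Move moves[256];
    int n = generate_moves(pos, moves);
    for (int i = 0; i < n; i++) {
        make_move(pos, moves[i]);
        int score = -search(pos, -beta, -alpha, depth - 1);
        unmake_move(pos, moves[i]);
        if (score >= beta)
            return beta;                /* fail-hard beta cutoff */
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}

The depth gain comes from the (depth - 1 - R) verification search being far
cheaper than a real one; the Elo question above is whether that gain outweighs
the zugzwang and tactical oversights it introduces.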
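
And the hash-bandwidth arithmetic is easy to check. The 16-bytes-per-entry
figure is my assumption of a typical table entry (a 64-bit signature plus
packed score/move/draft/flags); DB's hardware, as noted, did not hash at all:

/* back-of-the-envelope: hash probe traffic at 1 billion NPS,
 * assuming one probe per node and a 16-byte entry (assumption) */
#include <stdio.h>

int main(void) {
    double nps        = 1.0e9;  /* peak nodes per second */
    double entry_size = 16.0;   /* bytes per hash entry (assumed) */
    printf("probe traffic: %.0f GB/s\n", nps * entry_size / 1.0e9);
    return 0;   /* prints 16 GB/s, ignoring store/write-back traffic */
}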
>
>>
>>
>>>
>>>
>>>>you can do as much as possible in parallel, and adding another parallel slice
>>>>of computation doesn't slow it down at all unless you overflow the adder tree
>>>>and are forced to add another level.
>>>>
>>>>IE there are lots of things I would _like_ to do in Crafty, but most of them
>>>>hurt overall speed. And too much of that kills the overall skill of the
>>>>program. If I could design the engine, knowing that anything I do is not going
>>>>to crush search speed, I would have a _far_ different search engine than I do
>>>>today.
>>>>
>>>>
>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>Deep Thought also had a problem in the repetition detection, and I believe that
>>>>>>>the search algorithm of the top programs of today is superior because Deep
>>>>>>>Thought did not use null move or other pruning methods.
>>>>>>
>>>>>>There is nothing that says you must use forward-pruning methods to write a
>>>>>>strong program. Nothing at all. DT had repetition problems in the chess
>>>>>>hardware, yes. But in _spite_ of that it played like a super-GM. DB and DB2
>>>>>>had no such problems.
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>Uri
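
As a footnote to the "bishop + wrong rook pawn" knowledge mentioned earlier in
the thread (Cray Blitz vs Sentef, 1981): that kind of knowledge is cheap to
encode statically. Here is a toy recognizer for the classic drawing rule; the
square layout and helper names are mine, not Cray Blitz's, and a real
recognizer would also need a king-race test for defenders not yet in the
corner:

/* Toy recognizer for the "wrong rook pawn" draw: white king + bishop +
 * rook pawn vs lone black king.  Squares are 0..63 with a1 = 0, h8 = 63.
 * Handles only the static case where the defending king already holds
 * the promotion corner. */
#include <stdlib.h>

#define FILE_OF(sq)  ((sq) & 7)
#define RANK_OF(sq)  ((sq) >> 3)
#define IS_DARK(sq)  (((FILE_OF(sq) + RANK_OF(sq)) & 1) == 0)  /* a1 is dark */

static int king_distance(int a, int b) {
    int df = abs(FILE_OF(a) - FILE_OF(b));
    int dr = abs(RANK_OF(a) - RANK_OF(b));
    return df > dr ? df : dr;
}

/* Returns 1 if the position is a book draw by the wrong-bishop rule. */
int wrong_bishop_draw(int white_pawn_sq, int white_bishop_sq, int black_king_sq) {
    int file = FILE_OF(white_pawn_sq);
    if (file != 0 && file != 7)
        return 0;                        /* only rook pawns qualify */
    int corner = (file == 0) ? 56 : 63;  /* a8 or h8, the promotion corner */
    if (IS_DARK(white_bishop_sq) == IS_DARK(corner))
        return 0;                        /* bishop controls the corner: wins */
    /* Wrong bishop: draw if the defending king holds the corner zone. */
    return king_distance(black_king_sq, corner) <= 1;
}

This is shown for the white-pawn case only; an engine would mirror it for
black, and would score such positions as dead draws no matter how deep the
search says the pawn can run.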