I don't know why I keep messing around with RC5, but I do. Sad to say, my BFG 9800GT on XP 32-bit outperforms my BFG GTX 260 on XP-64, both running the CUDA 2.1 beta 7 client, and that shouldn't happen. First I thought it might be an issue with the Intel motherboard, so I installed XP-64 on an AMD X2 system with a Gigabyte board. Same performance. Under the earlier beta 5 Linux CUDA 2.0 client, the 260 ran 7,200 units a day no problem, while the 9800GT was getting 5,000-something.
Then, in their wisdom to fix something, distributed.net "improved" things, and the CUDA 2.0 client can't be used anymore. Now I'm wondering if XP-64 is the culprit. It's a strange one: if the systems are started off together, the 260 runs faster than the 9800, yet by morning the summary shows more units processed on the 9800 than on the 260. All the power management features have been disabled on both systems. Possibly it's a heat issue, but I sort of doubt that, since if I restart the systems the 260 again runs faster than the other.
So in my quest, I downloaded the latest NVIDIA driver for XP-64 and ran the CUDA 2.2 client... performance was worse than with the 2.1 client, about 40,000,000 keys/sec less for the highest-optimized cores on both clients, going by the dnetc -bench check. So I reinstalled the CUDA 2.1 client last night. This morning the 260 had done 500 fewer units than the 9800GT.
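To put the bench numbers in perspective, here's a quick back-of-envelope conversion between keyrate and daily stats units. It assumes one RC5-72 stats unit is 2^32 keys; that assumption lines up with my numbers, but I haven't confirmed it against distributed.net's docs, so treat it as a sketch:

```python
# Back-of-envelope conversion between a dnetc -bench keyrate and daily
# stats units, ASSUMING one RC5-72 stats unit is 2^32 keys (my
# understanding of how distributed.net counts them -- not official).

KEYS_PER_UNIT = 2 ** 32
SECONDS_PER_DAY = 86_400

def units_per_day(keys_per_sec: float) -> float:
    """Daily stats units sustained at a given keyrate."""
    return keys_per_sec * SECONDS_PER_DAY / KEYS_PER_UNIT

def keyrate_for(units: float) -> float:
    """Keys/sec needed to sustain a given daily unit count."""
    return units * KEYS_PER_UNIT / SECONDS_PER_DAY

# The 40,000,000 keys/sec gap between the 2.1 and 2.2 clients would
# cost roughly this many units per day:
print(round(units_per_day(40_000_000)))   # 805
# And the 7,200 units/day the 260 did under CUDA 2.0 implies roughly
# this keyrate, in Mkeys/sec:
print(round(keyrate_for(7_200) / 1e6))    # 358
```

Under that unit-size assumption, the 40 Mkeys/sec bench regression alone accounts for around 800 units a day, which is in the ballpark of the gaps I'm seeing.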
I also tried installing Ubuntu 8.10, downloading the .run files for the CUDA 2.1 client, and installing those. Things went fine, but on reboot the X server would only run in low-graphics mode and I couldn't get it to reconfigure. I have to say that the Linux method of changing graphics drivers totally sucks compared to Windows.
Bottom line: it appears to me that distributed.net has gone and tried to "fix" something that, from the Linux standpoint, was working nicely and reliably under CUDA 2.0. In the quest for the perfect Windows client, and the desire to have a single client source code for everything, things are going downhill performance-wise. Not to mention that rather than anyone being able to run the client, you have to be on the bleeding edge, since the Linux distros don't support the latest CUDA offerings. In the end they lose people who just get tired of the constant dinking around with the system and move on to other efforts.
Yet at the same time there are other GPU projects that don't make you jump through as many hoops to get running. I don't know; it's all a little beyond me, other than knowing that not too long ago I was doing a good 23,000 units a day on RC5, and now maybe 10,200 a day, on the days I still crunch RC5. And after the delays in issuing new beta releases once the current betas expired, fewer and fewer RC5 units are being crunched by my systems. I also note that others out there have been begging for a release candidate, since the betas are working well enough that their completed work units are being added to the project's pool of finished work. The upshot is that they're losing contributors to frustration, which isn't exactly the direction most project managers try to take their projects.
It's a rant in a way, but really, it's just pure and simple frustration. We've had several projects disappear, and the sum of it all is that the crunching just isn't as fun as it used to be. And while distributed.net's OGR series of projects is going great guns, RC5 isn't doing well in comparison. Just my two cents, not that it's worth anything.