Message boards : BOINC Manager : BOINC Bench for GPU
Joined: 12 Dec 08 Posts: 4
BOINC has a benchmark for CPUs (Dhrystone and Whetstone)... why isn't there a similar benchmark for GPUs?
Joined: 29 Aug 05 Posts: 15563
I just had a chat with one of the developers. He told me that GPUs report back what speed they run at when a program asks. BOINC doesn't store this information anywhere, as BOINC only queries the card(s) once, when it starts up. The science application then handles the rest: basically, BOINC tells the application which device to use, and the application asks the card how fast it is and adjusts accordingly.
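For illustration, a minimal sketch of the kind of device query being described - assuming the CUDA runtime API (cudaGetDeviceCount / cudaGetDeviceProperties) and the 8-cores-per-SM layout of compute-capability-1.x cards of this era. This is not BOINC's actual code:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; dev++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // clockRate is reported in kHz. 8 cores per SM holds for
        // compute capability 1.x cards; 2 FLOPs per core per clock
        // (multiply-add) is an assumed theoretical peak.
        double peak_gflops = prop.multiProcessorCount * 8.0
                           * (prop.clockRate * 1e3)   // shader clock in Hz
                           * 2.0                      // FLOPs/core/clock (assumed)
                           / 1e9;
        printf("Device %d: %s, %d SMs @ %d kHz, ~%.1f GFLOPS peak\n",
               dev, prop.name, prop.multiProcessorCount,
               prop.clockRate, peak_gflops);
    }
    return 0;
}
```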
Joined: 5 Oct 06 Posts: 5129
> I just had a chat with one of the developers. He told me that GPUs report back what speed they run at when a program asks. BOINC doesn't store this information anywhere, as BOINC only queries the card(s) once, when it starts up. The science application then handles the rest.

That's a cop-out by the developers. Exactly the same thing could be said of CPU speeds: CPUs will reply (or allow the speed to be measured by experiment) if BOINC asks. BOINC has chosen to ask for that information, and to store and use the data in work fetch calculations.

How does BOINC calculate the initial work fetch for a GPU/CUDA project application?
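For what it's worth, a sketch of the calculation the question implies - assuming the client turns a task's <rsc_fpops_est> into a runtime estimate by dividing by the device's queried peak speed, then fills a buffer measured in seconds. The names are illustrative, not BOINC's actual identifiers:

```cpp
// Estimated runtime of a task on a given device, scaled by the
// duration correction factor (DCF) accumulated from past tasks.
double estimated_runtime_secs(double rsc_fpops_est,  // from the workunit
                              double device_flops,   // peak FLOPS from the GPU query
                              double dcf)            // duration correction factor
{
    return rsc_fpops_est / device_flops * dcf;
}

// Work fetch then asks for enough seconds of work to fill the buffer.
double seconds_to_request(double buffer_secs, double queued_secs) {
    double shortfall = buffer_secs - queued_secs;
    return shortfall > 0 ? shortfall : 0;
}
```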
Joined: 29 Aug 05 Posts: 15563
Better to gather a lot of questions on the subject of how BOINC deals with CUDA, so that I can put them to the developers in one go. I'm not going to ask them one question at a time.
Joined: 5 Oct 06 Posts: 5129
> Better to gather a lot of questions on the subject of how BOINC deals with CUDA, so that I can put them to the developers in one go. I'm not going to ask them one question at a time.

Good idea, but I don't think it's the right way to go. I'm not a programmer (at least, not at this level): all I can do is monitor what's happening in different places, and sometimes act as a messenger if a topic crops up in one place and has relevance somewhere else.

Be aware that SETI@Home has started experimenting with a CUDA GPU application - very much a Beta app, and only available at their Beta project. This has generated a new wave of interest in the CUDA platform. SETI is, of course, open-source, and there has been a tradition over the years of various independent developers producing optimised versions of the SETI apps. The current torch-bearers in that tradition are the 'Lunatics', based at http://lunatics.kwsn.net/index.php. They had already started on their own (unsupported by NVidia) CUDA port of both SETI and Astropulse (the current official Beta is for SETI only). They have just been sent an official copy of the SETI CUDA Beta source code (the live repository is apparently unavailable at the moment for some reason), and I suspect they will be poring over it and coming up with all sorts of questions and suggestions at a phenomenal rate - probably all through the holiday season.

What would be an efficient way to synergise these two parallel development efforts? The official SETI project development team are showing signs of severe overstretch - their Astropulse re-release has not gone well, and simple niggles have gone unaddressed for months. So I don't think we can ask or expect them to act as co-ordinators for BOINC issues. Me watching from the sidelines (I have access to the Lunatics pre-release test forums where much of the discussion takes place), passing questions to you, passing questions to David/Rom, and vice versa, isn't going to work either. I don't (yet) even have a CUDA-capable graphics card, so I can't even participate in the testing or reproduce issues in a way I can report here. I think we need to find a new way of joining Berkeley (both BOINC and SETI) up with the rest of the world, but I don't know what it is yet.

With regard to specific questions: the Lunatics team have come up against the same 'incorrect priority' issue in BOINC v6.4.5 as BeemerBiker. Their workaround is to use Process Lasso to manipulate the priorities of the three individual threads within the SETI Beta application. Some people seem to be increasing priorities, others decreasing them: anything except the default works, it seems!

Another issue which is going to come up again and again and again is the thorny old subject of credit. I think BOINC has a duty to the projects to take a firm grip on this, and ensure that the credit-granting behaviour of CUDA-BOINC is clear and transparent to users before a new round of credit wars breaks out. Here are two genuine examples of credit confusion from SETI, both posted in the 60 hours or so since the Beta release: both wrong, and contradictory with each other, but they will take up a lot of project time if BOINC doesn't get the central core right. I'm expressing both these points in my own words - the original posts were a bit difficult to follow, but I think I've caught the spirit of the original authors.

"This new application can do the work in 60 seconds - it used to take 3,000 seconds. My RAC is going to be astronomical. Then SETI is going to have to reduce the credit per WU for all participants, to maintain cross-project parity." FALSE: the new SETI app can't do the work in 60 seconds. BOINC only reports the CPU time - the GPU and elapsed times (both unreported) are much longer. His RAC won't rise by anything like the proportion he's expecting.

"This new application can do the work in 60 seconds - I'll get far less credit than everybody else, because credit is based on time, right?" FALSE: credit at SETI is based on a crude approximation of the number of FLOPs the calculation takes, and it shouldn't make any difference whether those FLOPs are performed on a CPU or a GPU.

But what would the credit awarded be if a project still using benchmark*time credit released a CUDA app? Especially - to return to the original topic - since BOINC supplies neither a GPU benchmark nor the GPU time.
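To make that contrast concrete, here is a minimal sketch of the two credit schemes being compared, using BOINC's published Cobblestone definition (200 credits per day of work on a reference machine doing 1 GFLOPS on Whetstone and 1 GIPS on Dhrystone). Function and variable names are illustrative, not BOINC's actual code:

```cpp
// Classic benchmark*time claim: average the Whetstone and Dhrystone
// benchmarks and scale by the reported CPU time. A CUDA app that reports
// only a tiny CPU time would claim almost nothing under this scheme.
double benchmark_time_credit(double whetstone_flops, double dhrystone_iops,
                             double cpu_time_secs)
{
    double credit_per_day = 100.0 * (whetstone_flops / 1e9
                                   + dhrystone_iops / 1e9);
    return credit_per_day * cpu_time_secs / 86400.0;
}

// Flop-counting claim (the SETI approach): credit depends only on the
// work done, so it is the same whether the FLOPs ran on a CPU or a GPU.
double flop_counted_credit(double fpops_done)
{
    return fpops_done * 200.0 / (86400.0 * 1e9);
}
```

Sanity check: a 1 GFLOPS machine doing a full day of counted FLOPs (86400e9) earns 200 credits under either formula, matching the Cobblestone definition.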
Joined: 9 Apr 06 Posts: 302
Regarding the credit part of Richard's post: in practice this app takes about 2 hours on a quad-core 2.66 GHz CPU with a GeForce 9400GT, with the CUDA app at RealTime process priority (worker thread priority is 16 in that case). The small ~1 minute times are CPU times ONLY - not a relevant measure of the CUDA app's work at all. So there will certainly be a credit boost (because CUDA tasks can run alongside CPU apps), but don't expect it to be too high: the CPU completes the same tasks in less than 1 hour on the same host.
Joined: 30 Dec 05 Posts: 470
Like Richard, I am only observing what is going on, due to lack of GPU hardware. But this lack of GPU benchmarks and GPU time is going to play havoc in two areas.

The first is DCF: reporting that a CUDA task took only a few minutes, when the same task processed on the CPU would take an hour or so, will lead to some very weird work fetch problems in the scheduler. This has already been observed and reported by Father Ambrose.

The second relates to credits. On first sight it shouldn't, as SETI uses FLOP counting, BUT Eric runs his cross-project parity program, which compares the flop-counted credits to the performance of the median computer using benchmark * time, and adjusts the credits per FLOP accordingly. Therefore, if the GPU time is not reported, the median time for all multibeam tasks will decrease, and consequently the credits will have to be decreased to maintain parity.
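A minimal sketch of that DCF feedback, assuming the client nudges the correction factor toward the ratio of reported to estimated runtime (raising it at once, lowering it slowly) - illustrative, not the client's exact code:

```cpp
// Duration correction factor update after a task completes.
double update_dcf(double dcf, double estimated_secs, double reported_secs) {
    double ratio = reported_secs / estimated_secs;
    if (ratio > dcf) return ratio;        // estimates too low: correct at once
    return dcf + 0.1 * (ratio - dcf);     // estimates too high: drift down slowly
}

// If a CUDA task estimated at 3600 s reports only its 180 s of CPU time,
// ratio = 0.05, DCF sinks toward 0.05, all runtime estimates shrink ~20x,
// and the client fetches roughly 20 times too much work.
```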
Joined: 19 Jan 07 Posts: 1179
> I just had a chat with one of the developers. He told me that GPUs report back what speed they run at when a program asks. BOINC doesn't store this information anywhere, as BOINC only queries the card(s) once, when it starts up. The science application then handles the rest.

BOINC developers lately don't care about doing things right, just doing them fast, because they got themselves into some sort of deadline to get GPU support out the door. Work fetch is currently quite broken even for non-GPU apps because of that.
Joined: 15 Dec 08 Posts: 2
> Regarding the credit part of Richard's post.

GPUGRID.net are pioneers when it comes to BOINC + CUDA. They learned a trick or two from an earlier pioneer, Folding@home. This is one result: http://www.gpugrid.net/result.php?resultid=158852 - they've managed to send back both CPU time and GPU time; in that case, 9,000 seconds of CPU and 25,000 seconds of GPU, or approximately 2.5 hours and 7 hours respectively. They've kept their user base satisfied at 3,200 credits per WU, regardless of the GPU speed.
Joined: 5 Oct 06 Posts: 5129
> Regarding the credit part of Richard's post.

That's an excellent start - I'll suggest it over at SETI Beta (the more information, the better). But all they've done is add an extra information line into <stderr_txt> - that's not the same as formally recording it in the BOINC client metrics for re-use in the work fetch, DCF, and credit functions.
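For reference, the app-side trick looks roughly like this - anything an application writes to stderr is captured by the client into the result's <stderr_txt>. boinc_wu_cpu_time() is the BOINC API call for CPU time; treating wall-clock elapsed time as the GPU time is an assumption made here for illustration:

```cpp
#include <cstdio>
#include <ctime>
#include "boinc_api.h"

// Write both times to stderr, which ends up in <stderr_txt>.
// wall_start would be recorded when the GPU work began (illustrative).
void report_times(time_t wall_start) {
    double cpu_time = 0;
    boinc_wu_cpu_time(cpu_time);                       // CPU time via the BOINC API
    double gpu_time = difftime(time(0), wall_start);   // elapsed time as a GPU-time proxy (assumption)
    fprintf(stderr, "CPU time: %.0f s, GPU (elapsed) time: %.0f s\n",
            cpu_time, gpu_time);
}
```

As noted above, this only surfaces the number to users reading the result page; the client itself never sees it.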