Message boards : Projects : Projects excluding GPU's
Message board moderation
Send message Joined: 24 Jan 08 Posts: 7 |
I'm seeing a disturbing pattern of projects that now exclude computers with Nvidia GPUs from GPU-enabled work unless the GPU has more than 256 MB of memory. Seti@Home, Milkyway and Einstein are three examples.

First, I think it's pathetic that a programmer can't make applications like these work within 256 megabytes of memory. Second, I have a Dell Precision M6300 and I can't upgrade my video. Is there a way to determine which projects will not allow me to participate with my GPU, so that I can exclude them entirely from my computer time? |
Send message Joined: 29 Aug 05 Posts: 15563 |
Second, I have a Dell Precision M6300 and I can't upgrade my video.

So just because you cannot upgrade your (proprietary Dell) videocard, project developers are pathetic when it comes to programming for these lower videocards?

- Seti needs only 254 MB on the card, or 384 MB when you update to video drivers above 190.38 and the CUDA 2.3 Software Development Kit (SDK). A side effect of the CUDA 2.3 drivers and above is that they see less memory on the videocard than there really is. This is an Nvidia problem, not a BOINC or project problem.
- Einstein has the continuing problem that its FFT algorithm does not fit in under 256 MB of memory. They have only temporarily set the requirement to 300 MB, until they know how much video memory their latest BRP3 application actually takes up.
- Milkyway has always been an ATI project first, anything else later. They require double precision in their calculations, something that your Dell proprietary Nvidia chip cannot do. So you don't even have to fret over this project's requirements.

Is there a way to determine which projects will not allow me to participate with my GPU so that I can exclude them entirely from my computer time?

Normally BOINC will tell you as soon as you request CUDA work from the project: the project will answer back whether or not your GPU is OK. Other than that, check their forums. Einstein has a sticky thread in its Cruncher's Corner forum that shows the latest requirements, Milkyway has a GPU requirements thread in its forums, and Seti has a GPU forum where you can ask questions. Be polite; the people answering you are usually volunteers, just like you are. |
Send message Joined: 24 Jan 08 Posts: 7 |
Second, I have a Dell Precision M6300 and I can't upgrade my video. |
Send message Joined: 24 Jan 08 Posts: 7 |
So just because you cannot upgrade your (proprietary Dell) videocard... BTW, it is not a "proprietary Dell" video setup. It's an NVidia Quadro FX 1600M. |
Send message Joined: 29 Aug 05 Posts: 15563 |
Yes, that's a Dell proprietary chip. In other words, if you were to ask Nvidia whether you could upgrade it with one of their newer products, you'd get an answer to check with Dell, as they are the owners of that GPU. Nvidia makes its own mobile products. |
Send message Joined: 29 Aug 05 Posts: 15563 |
If that's what you believe, that's pathetic too. But we all know you know exactly what I meant.

Yeah, I see you're warming to the surroundings already. Welcome to the shark-tank. But fine, think what you think. Free speech and all that crap.

No, it's not an NVidia problem. My system has 256mb and SETI was working just fine.

With which drivers? I ask this since the science applications at Seti haven't changed in years. So if things worked before and now they don't, then something on your system changed; most probably the videocard drivers.

Then this is a communication problem on the part of the Einstein team. Instead of alienating volunteers, the message about video memory requirements should explain the reason for the GPU task rejection and give a URL for it, instead of expecting people to figure out by sheer guessing that they should go check the web site.

Which is why we're waiting for BOINC 6.12 to be released, which will do these things through the Notices tab. But aside from that, the information is out there, three mouse clicks away in the forums. And if you don't know, you can ask, or use the project's search. |
Send message Joined: 11 Mar 11 Posts: 1 |
Yes, that's a Dell proprietary chip. In other words, if you were to ask Nvidia if you could upgrade it with one of their newer products, you'd get an answer to check with Dell as they are the owners of that GPU. Nvidia make their own mobile products.

Not meaning to offend here, but you may wish to look up the meaning of "proprietary" before calling a mobile GPU from one manufacturer the sole design and production of the company that merely uses it. Dell only used the mobile GPU that NVidia created for laptop and low-power systems. How many implementations of those chips do *you* know that offer them as an upgradable module in *any* system built around such technology? Plus, NVidia doesn't produce laptops for sale, so of course they'd point people to the laptop maker regarding hardware upgrade options. That doesn't mean the laptop maker designed the chip.

With that said, I do understand that mobile GPU limitations will prevent their use for certain applications. Generally speaking, mobile GPUs don't even have dedicated video RAM (it's usually shared system memory).

Now, while I agree that there are limits, the claim that project developers are pathetic isn't entirely false. As a long-time computer user, I see the simplest app taking far more memory than it should. Programmers are no longer taught efficiency and tend, unintentionally, to write memory-hogging applications. It's more a fault of the education system that trains programmers than of the programmers personally.

Just as an FYI, I don't just use computers and watch how sloppy things get. For 30+ years, I have also been a programmer (mostly hobbyist stuff). I learned how to write small code by necessity (8-bit systems didn't even have megabytes of RAM). While GUI programming tends to bloat an application, things that have no GUI have little excuse for being overly memory-hoggish.

EDIT: missed a typo |
Send message Joined: 17 Jun 11 Posts: 8 |
These two BOINC projects have low enough GPU requirements that you might want to look at them before giving up:

http://boinc.thesonntags.com/collatz/
http://www.primegrid.com/

Also, Rosetta@Home (and therefore Ralph@home) has announced that it has dropped its plans to convert its current application to run on GPUs, since the algorithm is so serial in nature that it would actually run slower on GPUs than on more CPUs. WCG isn't saying much about its plans, but appears not especially interested in GPUs yet. SuperLink@Technion appears to have lost enough of its developers that it can no longer consider reprogramming to allow GPUs even if it wants to. A few other projects appear likely to consider adding GPU applications, but not until a suitable 6.14.* version of BOINC is available. I expect Poem@Home to be among the first of these.

Note: this web page does not list that model of graphics board, but does offer some information on what to expect for similar models: http://www.nvidia.com/page/quadrofx_family.html |
Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.