Thread 'Can I run BOINC programs on PCI (Not PCI-E) GPUs?'

Message boards : GPUs : Can I run BOINC programs on PCI (Not PCI-E) GPUs?
DavidVR

Joined: 14 Jun 11
Posts: 6
United States
Message 38415 - Posted: 14 Jun 2011, 8:39:57 UTC

I don't expect spectacular performance, but I want to maximize my contributions to BOINC projects that use GPUs, like Collatz conjecture or MilkyWay@home. Is there any way to get these projects to run on a PCI GPU?
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5129
United Kingdom
Message 38416 - Posted: 14 Jun 2011, 9:07:50 UTC - in response to Message 38415.  
Last modified: 14 Jun 2011, 9:26:40 UTC

I don't expect spectacular performance, but I want to maximize my contributions to BOINC projects that use GPUs, like Collatz conjecture or MilkyWay@home. Is there any way to get these projects to run on a PCI GPU?

Yes, the bus doesn't matter to BOINC. You'd need to keep an eye on other limitations and specifications:

- Check the minimum driver level at the projects.
- Likewise, check the minimum video RAM requirement (cards with only 256 MB RAM are pretty much restricted to Collatz and SETI).
- I'd be slightly surprised to see a double-precision card (required by MilkyWay) on a PCI bus, but I haven't researched that.
- Projects which shift a lot of data to and from the GPU, like Einstein, might be slowed down by the older bus.

Edit - forgot to state the obvious: it has to be a supported card - 'CUDA' for nVidia (check on http://www.nvidia.co.uk/object/cuda_gpus_uk.html). People have recently found 8400GS and 9500GT cards in PCI format, though they're not sold in all countries. I'm afraid I don't know the equivalent data for ATI.
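
For anyone wanting to check those limits before attaching a card to a project, here is a minimal sketch using the CUDA runtime API. It assumes a CUDA-capable card and the CUDA toolkit installed; the 1.3 compute-capability threshold for double precision is a known hardware cut-off, but the printed text and the choice to query only device 0 are illustrative, not anything BOINC or the projects actually run:

```c
/* Minimal device-query sketch: reports the things listed above
   (video RAM, compute capability, driver level, double-precision support).
   Queries device 0 only, for brevity. Build with nvcc (links against the CUDA runtime). */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU found - check the card and driver level\n");
        return 1;
    }

    int driver = 0, runtime = 0;
    cudaDriverGetVersion(&driver);     /* e.g. 4000 for a CUDA 4.0 driver */
    cudaRuntimeGetVersion(&runtime);

    struct cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    printf("GPU:                %s\n", prop.name);
    printf("Video RAM:          %zu MB\n", prop.totalGlobalMem / (1024 * 1024));
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    printf("Driver / runtime:   %d / %d\n", driver, runtime);

    /* Double precision (needed by MilkyWay) requires compute capability 1.3 or higher */
    if (prop.major > 1 || (prop.major == 1 && prop.minor >= 3))
        printf("Double precision:   supported\n");
    else
        printf("Double precision:   not supported\n");

    return 0;
}
```

Note that the bus type never appears in that output: BOINC and the project apps only see the device's capabilities, which is why a PCI (rather than PCI-E) card can work, just more slowly for data-heavy projects.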
Jord
Volunteer tester
Help desk expert

Joined: 29 Aug 05
Posts: 15565
Netherlands
Message 38420 - Posted: 14 Jun 2011, 9:59:53 UTC - in response to Message 38416.  

I'm afraid I don't know the equivalent data for ATI.

Some HD3850s and HD4650/4850s are available in AGP format, mostly from Sapphire and PowerColor.
Claggy

Joined: 23 Apr 07
Posts: 1112
United Kingdom
Message 38428 - Posted: 14 Jun 2011, 13:25:13 UTC - in response to Message 38415.  

Both Seti and Collatz work on my PCI 512 MB 8400 GS no problem (on 266.58 drivers; there are problems with 27x.xx and later drivers).

And Collatz works no problem on my Asus AGP HD4650 too, and it is vastly faster than the 8400 GS.

Claggy
Tattoo Buckeye

Joined: 17 Jul 11
Posts: 15
United States
Message 39069 - Posted: 18 Jul 2011, 13:47:53 UTC - in response to Message 38428.  

Both Seti and Collatz work on my PCI 512 MB 8400 GS no problem (on 266.58 drivers; there are problems with 27x.xx and later drivers).

And Collatz works no problem on my Asus AGP HD4650 too, and it is vastly faster than the 8400 GS.

Claggy


Claggy,

I'm trying to use a PCI 512 MB 8400 GS with SETI, but without success. I'm using the 275 drivers, as they are the only ones in my distro, and my system does not find my GPU. Could you please elaborate on the problems you've seen with the 275 drivers?

Thanks
Claggy

Joined: 23 Apr 07
Posts: 1112
United Kingdom
Message 39088 - Posted: 18 Jul 2011, 22:16:46 UTC - in response to Message 39069.  
Last modified: 18 Jul 2011, 22:19:31 UTC

Both Seti and Collatz work on my PCI 512 MB 8400 GS no problem (on 266.58 drivers; there are problems with 27x.xx and later drivers).

And Collatz works no problem on my Asus AGP HD4650 too, and it is vastly faster than the 8400 GS.

Claggy


Claggy,

I'm trying to use a PCI 512 MB 8400 GS with SETI, but without success. I'm using the 275 drivers, as they are the only ones in my distro, and my system does not find my GPU. Could you please elaborate on the problems you've seen with the 275 drivers?

Thanks

That was under Windows: with the 27x.xx (Cuda 4) drivers there are downclocking problems because the existing Windows Cuda apps aren't thread safe. Read Jason's post below for the full lowdown; I don't think it applies to Linux, and I didn't even try the 27x.xx drivers because of those problems:

OK, I will. It's quite involved, but I'll try the detail first, then explain further if needed.

Certain new methods by which Cuda 4 drivers handle memory & Cuda transfers are sensitive to being abruptly terminated without warning. All Windows-Boinc-Cuda app releases to date use boincApi code for their exit handling, given that Boinc needs to tell applications through this channel when to snooze/resume/exit etc., as well as when the worker needs to exit normally.

The symptoms of using Cuda 4 drivers with current Boinc-Cuda applications are primarily the 'sticky downclock' problem, but also other forms of unexplained erroring out.

There are related symptoms visible across non-Cuda (CPU) applications as well, the most visible being truncation or erasure of the stderr.txt contents, and possibly, less visibly, of checkpoint & result files too.

These sorts of symptoms, being apparently related to how 'nicely' the program treats the active buffer transfers when the application shuts down, seemed to be statistically more common on systems with lower bus/memory speeds, probably as a result of the transfers etc. taking longer (i.e. higher contention).

The trial solution in testing is to install exit code within boincAPI that 'asks' the worker thread (the one that feeds the Cuda device etc.) to shut down 'nicely', so that it can quickly finish what it is doing & tidy up before being 'killed'. At present this seems effective at preventing the downclock problem & possibly the stderr/etc. truncation symptoms as well, though we're poking at it to look for unexpected issues at this time. I've relayed as much information as I can to Berkeley & will leave it in their hands.

If you experience the downclock problems, there are currently two options I'm aware of:
- Downgrade to driver 266.58, which is not as sensitive to its tasks being summarily terminated, or
- Determine if it's a situation where you absolutely need the fix now: that would only be a possibility for this project (other projects don't have the fix yet & may not even be aware of the issue), and only under special circumstances, as it would involve pre-alpha testing of unproven code. We are a bit overworked at the moment with V7 & other development considerations, so please don't expect a rush release of this unproven code.

In any case, high-throughput hosts are statistically less susceptible to this problem, so it is quite possible that many hosts don't see the symptoms appear even with newer drivers & existing applications.

HTH, Jason


Claggy
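
The pattern Jason describes - asking the worker thread to finish what it is doing and tidy up before the process exits, instead of killing it mid-transfer - looks roughly like the sketch below. This is only an illustration of the general technique: the names (worker_main, quit_requested, do_one_chunk, release_gpu_buffers) are made up here and are not the actual boincApi code.

```c
/* Cooperative shutdown sketch: the exit path raises a flag and waits for the
   worker thread to finish its current chunk and release its GPU buffers,
   rather than terminating it in the middle of a transfer.
   All names here are illustrative, not the real boincApi symbols. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool quit_requested = false;

static void do_one_chunk(void)        { /* process one slice of the work unit on the GPU */ }
static void release_gpu_buffers(void) { /* free pinned buffers, tear down the CUDA context, ... */ }

static void *worker_main(void *arg)
{
    (void)arg;
    while (!atomic_load(&quit_requested))
        do_one_chunk();               /* keep crunching until asked to stop */
    release_gpu_buffers();            /* tidy up before the process goes away */
    return NULL;
}

/* Called from the application's exit path instead of simply letting
   process termination kill the worker while transfers are in flight. */
static void shut_down_worker(pthread_t worker)
{
    atomic_store(&quit_requested, true);
    pthread_join(worker, NULL);       /* wait for the clean shutdown to finish */
}

int main(void)
{
    pthread_t worker;
    pthread_create(&worker, NULL, worker_main, NULL);
    /* ... normal crunching; when the client asks the app to exit: ... */
    shut_down_worker(worker);
    printf("worker exited cleanly\n");
    return 0;
}
```

Abruptly killing the process while the worker still has transfers in flight is what appears to trigger the 'sticky downclock' and stderr truncation Jason mentions; the flag-and-join approach gives the driver a chance to complete or cancel those transfers first.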

