Message boards : GPUs : Different projects and PCI-e bandwidth usage
Joined: 11 Sep 13 Posts: 57
I've been having some discussions on several sites about GPUs and bandwidth usage for distributed-computing projects, and I wanted to broaden things by hopefully getting some BOINC experts in on the matter.

Most of us are probably familiar with GPU mining and how hardware is generally deployed in those farms, but for anyone who isn't: a mining farm is typically built from a cheap motherboard with as many PCI-e slots as possible (of any size), some basic RAM, a cheap CPU, and of course the GPUs. Due to space limitations, the GPUs are normally connected to the motherboard via flexible risers: an x1 adapter at the motherboard end, an x16 adapter at the GPU end, and a USB 3.0 cable connecting the two. Essentially, these are PCI-e x1 extension cables. They do not actually use the USB interface; a USB 3.0 cable is used simply because it has the right number of wires inside to carry an x1 link.

Because these risers are limited to x1 bandwidth, high-bandwidth applications such as gaming would suffer significant performance reductions. Cryptocurrency mining, by contrast, needs very little bus bandwidth, so nothing is lost: an x1 link on PCI-e 2.0 or 3.0 is never maxed out. I had assumed that, since mining needs so little bandwidth, distributed-computing projects might be the same. Over the past few weeks I've been discussing this on the Folding@Home forums, and to my disappointment, anything less than PCI-e 3.0 x4 or PCI-e 2.0 x8 saturates the bus there, so performance is lost because the GPUs never reach full load. That was a letdown, as I had wanted to build a system specced like a mining rig for distributed computing. After a bit of thinking, I started to wonder whether every project needs as much bandwidth as F@H, so here I am.

With the lengthy backstory out of the way, my question to you is simply this: are there any GPU projects on the BOINC platform that do not saturate a PCI-e x1 interface? I would love to get some data from anyone running GPU projects. MSI Afterburner shows bus usage, so if a few people are willing to spend two or three minutes taking measurements, I would really appreciate it. Please also let me know the size and PCI-e version of the slot your GPU is in.

This is a signature
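Edit: for anyone who wants the raw numbers behind "x1 bandwidth", here's a quick back-of-the-envelope sketch. The per-lane rates are the published theoretical figures; real-world throughput is somewhat lower due to protocol overhead.

```python
# Theoretical one-way PCIe bandwidth per lane, in GB/s.
# PCIe 1.x/2.0 use 8b/10b line encoding; PCIe 3.0 uses 128b/130b.
PER_LANE_GB_S = {
    "2.0": 5.0 * (8 / 10) / 8,     # 5.0 GT/s -> 0.5 GB/s
    "3.0": 8.0 * (128 / 130) / 8,  # 8.0 GT/s -> ~0.985 GB/s
}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Theoretical one-way bandwidth in GB/s for a PCIe link."""
    return PER_LANE_GB_S[gen] * lanes

for gen in PER_LANE_GB_S:
    print(f"PCIe {gen} x1:  {link_bandwidth(gen, 1):.2f} GB/s")
    print(f"PCIe {gen} x16: {link_bandwidth(gen, 16):.2f} GB/s")
```

So a riser caps each card at roughly 0.5 GB/s (2.0) or 1 GB/s (3.0) each way; the question is which projects actually stay under that.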
Joined: 16 Dec 15 Posts: 15
Win 10 + GPU-Z 0.8.8
2500K + GTX 760 + AMD 7970

The 760 is running PrimeGrid (CUDA PPS Sieve), and its 'Bus Interface Load' fluctuates between 1-3%. The 7970 is running Collatz and POEM@Home; GPU-Z does not present any bandwidth information for it. Both GPUs use custom app_config.xml files: the 7970 runs 2 POEM tasks at a time, the 760 runs 2 PPS Sieve tasks, and when the 7970 is on Collatz it runs 1 task at a time.

Let me know if I can provide any other information. I will be traveling later this week, so forgive any delayed response.
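For anyone curious, the two-tasks-per-GPU setup comes from an app_config.xml along these lines. This is only a sketch: the app name below is a placeholder, and you have to substitute the name your project actually reports in client_state.xml.

```xml
<!-- Sketch of an app_config.xml running two tasks per GPU.
     "example_app" is a placeholder; take the real app name
     from the <app> entries in client_state.xml. -->
<app_config>
  <app>
    <name>example_app</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage> <!-- each task claims half the GPU -->
      <cpu_usage>0.5</cpu_usage> <!-- CPU fraction reserved per task -->
    </gpu_versions>
  </app>
</app_config>
```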
Joined: 11 Sep 13 Posts: 57
It's a start. Thanks. From other BOINC project forums, as well as the F@H forums, it doesn't look too hopeful for the kind of system I wanted to plan out and build, but there are a lot of inconsistencies, so... the research continues!

This is a signature
Joined: 5 Oct 06 Posts: 5129
I suggest that you think about the sort of work each project is doing, and how it fits into your interests and intended build. GPUs, in the configuration you describe, are probably best suited to integer calculations based on a formula or algorithm. They are probably least suited to projects which search large quantities of pre-recorded data and perform double-precision floating-point arithmetic on it.

You can see how Bitcoin fits into the first group. It shares those characteristics with, say, Collatz Conjecture and PrimeGrid. I don't know of any project which falls completely into the second group: Milkyway requires double-precision floating-point arithmetic, but I think it is doing algorithmic simulations rather than data searches. Examples of middle-of-the-road single-precision data searches include Einstein, GPUGrid, and SETI. (Edit: GPUGrid is perhaps better described as a simulation based on large volumes of input data, but it still needs that bandwidth.)

You can probably work out other contenders for yourself by reading down the Category column in Choosing BOINC projects.
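To put rough numbers on that intuition, here is an illustrative model (my own figures, not measurements from any project): a task becomes bus-limited when the data it must stream across the link, at the rate the GPU could consume it, exceeds the link's bandwidth.

```python
# Rough model: a GPU task is PCIe-bound when its required streaming
# rate exceeds the link bandwidth. All workload numbers are illustrative.

def min_lanes_needed(stream_gb_s: float, per_lane_gb_s: float) -> int:
    """Smallest standard lane count covering the required stream rate."""
    lanes = 1
    while lanes * per_lane_gb_s < stream_gb_s:
        lanes *= 2  # links come in x1, x2, x4, x8, x16
    return lanes

PCIE2_PER_LANE = 0.5  # GB/s per lane, theoretical

# Hypothetical bus-traffic rates (GB/s) for the two kinds of work:
workloads = {
    "algorithmic integer search (tiny I/O)": 0.05,
    "large-data search (streaming input)":   1.5,
}

for name, rate in workloads.items():
    lanes = min_lanes_needed(rate, PCIE2_PER_LANE)
    print(f"{name}: needs at least x{lanes} on PCIe 2.0")
```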
Joined: 18 Oct 14 Posts: 1487
GPU-Z shows my GTX 660, running 3 Cuda 55 tasks at a time, at 3% bus saturation on a PCIe 2.0 x16 bus. PCIe bus bandwidth is not a bottleneck there.
Joined: 11 Sep 13 Posts: 57
Hmm... 3% on that slot would then equate to about 48% on x1 2.0 (one sixteenth of the lanes), or about 24% on x1 3.0 (since a 3.0 lane carries roughly twice the bandwidth of a 2.0 lane). But then again, a 660 is pretty outdated; it would be considered a lower-midrange card by today's standards, and a faster modern card would generate correspondingly more bus traffic. Likely, I'll just need to go for a standard system, then.

This is a signature
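Edit: spelling out my extrapolation, on the assumption that bus load scales inversely with link bandwidth (a simplification; real traffic can burst, and behavior near saturation is non-linear):

```python
# Extrapolate a measured bus load to an x1 link, assuming load scales
# inversely with link bandwidth. Per-lane figures are theoretical.
measured_load = 0.03   # 3% reported by GPU-Z
measured_lanes = 16    # x16 slot
per_lane_2_0 = 0.5     # GB/s, PCIe 2.0
per_lane_3_0 = 0.985   # GB/s, PCIe 3.0

traffic = measured_load * measured_lanes * per_lane_2_0  # ~0.24 GB/s

print(f"x1 PCIe 2.0: ~{traffic / per_lane_2_0:.0%} bus load")  # ~48%
print(f"x1 PCIe 3.0: ~{traffic / per_lane_3_0:.0%} bus load")  # ~24%
```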