Thread 'Two projects on one GPU?'

Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5128
United Kingdom
Message 93469 - Posted: 31 Oct 2019, 18:43:49 UTC - in response to Message 93468.  

Dunno. You could be the first to try it!

You'd need two app_config.xml files - one for each project - both with a <gpu_usage> value of 0.5 and a <max_concurrent> of 1. I'd suggest you avoid fetching CPU tasks from either project while you test.
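Something like this in each project's folder should do it. A minimal sketch, assuming a placeholder app name - the <name> value has to be the application name that project actually uses (look it up in client_state.xml), not the project name:

<app_config>
   <app>
      <!-- placeholder name: replace with the real app name from client_state.xml -->
      <name>project_app_name</name>
      <max_concurrent>1</max_concurrent>
      <gpu_versions>
          <gpu_usage>0.5</gpu_usage>
      </gpu_versions>
   </app>
</app_config>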
ID: 93469
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5128
United Kingdom
Message 93473 - Posted: 31 Oct 2019, 19:39:36 UTC - in response to Message 93471.  

That reminds me... BOINC v7.14.2 doesn't handle <max_concurrent> very well. You might find count (2) works better under v7.16.3.

For count (1) - I can't speak for ATI tasks, but some Einstein tasks are very much boosted by running at a greatly enhanced process priority. The machine I'm typing on has Einstein's intel_gpu app running at real-time priority under Process Lasso. I notice a brief stutter each time a task finishes and another starts, but at once every five hours, that's not a hardship. Use that factoid with care and at your own risk.
ID: 93473
Joseph Stateson
Volunteer tester

Joined: 27 Jun 08
Posts: 641
United States
Message 93507 - Posted: 4 Nov 2019, 2:24:42 UTC - in response to Message 93478.  


A long time ago (I think on a Radeon HD 290) I used to run more than one Milkyway task at once - about 3, just using the one client. I got a reasonable speed increase. But now I don't; the GPU is already running at about 95% anyway with just one task. Either Milkyway has changed, or this card is different.



Not sure when, but a few years ago Milkyway started bundling multiple work units into each job. Looking in a result file one finds
<number_WUs> 4 </number_WUs>
so currently each job is four simple work units.
ID: 93507
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 93670 - Posted: 12 Nov 2019, 22:57:32 UTC
Last modified: 12 Nov 2019, 22:57:49 UTC

I would only suspect that the double precision part of the GPU can be utilized to process single or double precision.
A GPU doesn't have separate circuitry for processing single or double precision, which means the double precision part will need to be swapped frequently, causing a lot of data to be moved around.

Meaning that if you were running a double precision task at 100% efficiency, a half precision task would need to share those GPU resources, causing an increase in PCIe latencies, a potential increase in energy consumption, and lower performance.
ID: 93670
Keith Myers
Volunteer tester
Help desk expert

Joined: 17 Nov 16
Posts: 890
United States
Message 93673 - Posted: 13 Nov 2019, 1:25:32 UTC - in response to Message 93671.  

AnandTech is always the best source for high-level analysis of new CPU or GPU architectures, with good block diagrams and really knowledgeable analysis of the designs by writers like Dr. Ian Cutress for CPUs and Anton Shilov, Ryan Smith and Nate Oh for GPUs.
ID: 93673
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 93674 - Posted: 13 Nov 2019, 2:13:19 UTC - in response to Message 93671.  

I would only suspect that the double precision part of the GPU can be utilized to process single or double precision.
A GPU doesn't have separate circuitry for processing single or double precision, which means the double precision part will need to be swapped frequently, causing a lot of data to be moved around.

Meaning that if you were running a double precision task at 100% efficiency, a half precision task would need to share those GPU resources, causing an increase in PCIe latencies, a potential increase in energy consumption, and lower performance.


I don't know enough about the insides of GPUs to disagree, but that doesn't make sense. Since there are GPUs available with completely different ratios of double to single precision speed, I always thought that they were independent units. I'd love to see some designs of GPUs (even just block diagrams).


Double precision and single precision software have different benchmark scores when run on double precision capable hardware. That's just the way double and single precision work.

However, sharing that same hardware between two different tasks is like running two operating systems on one CPU.

There'll be a lot of overhead from data being switched back and forth between them.
On remote terminals you probably won't notice this much, because the CPU does a lot of the swapping in idle moments.
However, when folding/crunching, a CPU's utilization is almost constantly at 100%. There's no idle time to swap between tasks, so the primary task needs to be shut down and caches flushed to load the secondary task.
I would say you'll probably lose somewhere between 15 and 25% compared to running the tasks independently.
ID: 93674
Jord
Volunteer tester
Help desk expert

Joined: 29 Aug 05
Posts: 15560
Netherlands
Message 93688 - Posted: 13 Nov 2019, 16:06:17 UTC - in response to Message 93670.  

A GPU doesn't have separate circuitry for processing single or double precision
Single precision (32-bit) and double precision (64-bit) are types of floating point calculations. Double precision calculations can store a wider range of values with more precision. Both are calculated using the same floating point unit on the GPU; there's no data being moved around. Science applications are either single precision (most projects) or double precision. They're never both at the same time.
ID: 93688
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 93826 - Posted: 22 Nov 2019, 10:21:46 UTC

I have the same question.
I'm running an RTX 2060 with an Einstein@home CPU+GPU task.
The GPU part only draws 80 W, or 50% of my GPU.
I added my apps_config.xml file to the folder with this content:

<app_config>
   [<app>
      <name>Einstein@home</name>
      <max_concurrent>2</max_concurrent>
      <gpu_versions>
          <gpu_usage>0.5</gpu_usage>
      </gpu_versions>
    </app>]
</app_config>


However, now I'm seeing only 40 watts of usage.

So I changed the <gpu_usage>0.5</gpu_usage> value to 1, but without success.

Any help is greatly appreciated.
ID: 93826
Jord
Volunteer tester
Help desk expert

Joined: 29 Aug 05
Posts: 15560
Netherlands
Message 93828 - Posted: 22 Nov 2019, 11:09:50 UTC - in response to Message 93826.  

At Einstein you can change how many tasks you want to run on the GPU via the project preferences. Change it there.
ID: 93828
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5128
United Kingdom
Message 93834 - Posted: 22 Nov 2019, 12:19:15 UTC - in response to Message 93826.  

<app_config>
   [<app>
      <name>Einstein@home</name>
      <max_concurrent>2</max_concurrent>
      <gpu_versions>
          <gpu_usage>0.5</gpu_usage>
      </gpu_versions>
    </app>]
</app_config>
Remove the square brackets round <app></app> - they are used in programming manuals to indicate optional sections.
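With the brackets removed, a minimal sketch of how the same file should read:

<app_config>
   <app>
      <name>Einstein@home</name>
      <max_concurrent>2</max_concurrent>
      <gpu_versions>
          <gpu_usage>0.5</gpu_usage>
      </gpu_versions>
   </app>
</app_config>

(An assumption on my part, not something you've shown us: for the client to pick the file up at all, it must be named app_config.xml and sit in the Einstein project directory, and the <name> element has to match the short application name Einstein reports in client_state.xml - something like hsgamma_FGRPB1G - rather than the project name.)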
ID: 93834
Dave
Help desk expert

Joined: 28 Jun 10
Posts: 2691
United Kingdom
Message 93841 - Posted: 22 Nov 2019, 15:52:29 UTC - in response to Message 93840.  

It amazes me we are editing config files in the 21st century. Come on, this isn't DOS anymore. Why isn't all this in the GUI?


ACHTUNG!
ALLES TURISTEN UND NONTEKNISCHEN LOOKENSPEEPERS!
DAS KOMPUTERMASCHINE IST NICHT FÜR DER GEFINGERPOKEN UND MITTENGRABEN! ODERWISE IST EASY TO SCHNAPPEN DER SPRINGENWERK, BLOWENFUSEN UND POPPENCORKEN MIT SPITZENSPARKEN.
IST NICHT FÜR GEWERKEN BEI DUMMKOPFEN. DER RUBBERNECKEN SIGHTSEEREN KEEPEN DAS COTTONPICKEN HÄNDER IN DAS POCKETS MUSS.
ZO RELAXEN UND WATSCHEN DER BLINKENLICHTEN.


Making sure that those who know not that with which they play don't screw things up is, I suspect, the reason. Deliberate policy rather than laziness, etc.
ID: 93841
Jord
Volunteer tester
Help desk expert

Joined: 29 Aug 05
Posts: 15560
Netherlands
Message 93843 - Posted: 22 Nov 2019, 16:43:37 UTC - in response to Message 93842.  

And since the settings are then GUI based, it's impossible to put in a stupid value.
Try enabling all debug flags in the Event Log Options window and you'll find that having such things available through the GUI isn't always a good thing. I won't tell you what'll happen... Just try it. :-)
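For what it's worth, those Event Log Options checkboxes correspond to the <log_flags> section of cc_config.xml. A minimal sketch with only a couple of the standard flags turned on - about as many as you ever want at once:

<cc_config>
   <log_flags>
      <task>1</task>
      <sched_op_debug>1</sched_op_debug>
      <coproc_debug>1</coproc_debug>
   </log_flags>
</cc_config>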
ID: 93843