Thread 'Two projects on one GPU?'

Message boards : GPUs : Two projects on one GPU?

Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15560
Netherlands
Message 93846 - Posted: 22 Nov 2019, 18:06:56 UTC - in response to Message 93844.  

And if you do screw up, you can just go back and click "set defaults for this page" or something.
Not in the case of the debug flags, because your BOINC Manager won't connect to the client anymore due to the absolutely huge amount of RPC traffic it will spit out. Here your only options are to manually edit cc_config.xml or to remove it; the options menu won't warn you about this either.

In the case of the app_info.xml file (which is only for the anonymous platform, is never available via the GUI, and requires a client restart) and the app_config.xml file, it's probably best to edit them by hand. Or have a separate GUI for either, but preferably not one built into the main program or BOINC Manager.
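For reference, the debug flags live in the <log_flags> section of cc_config.xml in the BOINC data directory. A minimal sketch, with flag names taken from the client configuration documentation (set a flag to 0, or delete its line, to turn it off again):

```xml
<!-- cc_config.xml - place in the BOINC data directory.
     If the Manager can no longer connect, edit this file by hand,
     set the debug flags back to 0, and restart the client. -->
<cc_config>
    <log_flags>
        <cpu_sched_debug>1</cpu_sched_debug>
        <work_fetch_debug>0</work_fetch_debug>
    </log_flags>
</cc_config>
```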
ID: 93846
Les Bayliss
Help desk expert

Joined: 25 Nov 05
Posts: 1654
Australia
Message 93849 - Posted: 22 Nov 2019, 19:24:59 UTC

Try doing what Jord said, and enabling all debug flags.
It's great fun; it'll keep you busy for hours.
ID: 93849
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15560
Netherlands
Message 93852 - Posted: 22 Nov 2019, 19:55:29 UTC - in response to Message 93847.  

Well, most programs don't require editing text files. The GUI is set up so you can't kill the program entirely just by playing with options: you get a sensible range for each value, and so on.
You get a range for the debug flags as well: either 1 for on, or 0 for off.
It's entirely possible to run the client with all the debug flags on - the client will run fine. Just don't expect the Manager to be able to keep up with the remote procedure calls, as these then come in at something like 1,500-2,000 per second, while normal operation has a maximum of around 1,000 per second.

Most programs don't download separate programs that run intricate calculations on data.
The trouble with automating or GUI-fying app_config.xml is that you still need information read from the client_state.xml file: the application name. While a GUI may have an easier time reading the application name from client_state.xml, you'd have to add enough intelligence that it grabs the correct app name for the correct project, or shows all app names for all projects. And even then you can still make screw-ups.
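To illustrate: the name that app_config.xml needs is the short <name> from the project's <app> block in client_state.xml, not the project's display name. A sketch, with element names taken from the client documentation, the app name from the example later in this thread, and an illustrative user-friendly name:

```xml
<!-- Fragment of client_state.xml (read it, never edit it): -->
<app>
    <name>hsgamma_FGRPB1G</name>
    <user_friendly_name>Gamma-ray pulsar search</user_friendly_name>
</app>
```

An app_config.xml for that project must then use the same short name, hsgamma_FGRPB1G, in its own <name> element.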

As an aside, I see that ProDigit filled in Einstein@Home as the application name, which won't work either. These things are really for advanced users only, and then preferably those who know where to find the documentation by heart - and who know they'd best make backups, disable their internet connection, and exit the client before all editing and tinkering.

BOINC shouldn't be easy for everyone. That's why it has the Simple GUI first, the Advanced view second.
But if you think that's outdated, no one will keep you from adding code that does exactly what you want. If you expect someone else to do it, though, do know he's still quite busy.
ID: 93852
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 93856 - Posted: 22 Nov 2019, 21:35:52 UTC

The funny thing is that on an RTX 2060, Einstein uses 50% of the resources (~80 W of 160 W).
The 2060 also has about 1,920 shaders, or CUDA cores.
So I thought perhaps that's just how their tasks are designed, and I'd run it on one of my lower-end GPUs: on a GT 1030 (max TDP 30 W, 384 cores), it should keep the load high.
But no: my GT 1030s are loaded 30-40%.

Something on Einstein's end has to change. They have to increase the amount of GPU utilization from there.
ID: 93856
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 93869 - Posted: 23 Nov 2019, 7:56:53 UTC - in response to Message 93862.  

The funny thing is that on an RTX 2060, Einstein uses 50% of the resources (~80 W of 160 W).
The 2060 also has about 1,920 shaders, or CUDA cores.
So I thought perhaps that's just how their tasks are designed, and I'd run it on one of my lower-end GPUs: on a GT 1030 (max TDP 30 W, 384 cores), it should keep the load high.
But no: my GT 1030s are loaded 30-40%.

Something on Einstein's end has to change. They have to increase the amount of GPU utilization from there.


Are you just going by power usage, or are you looking at GPU load? Use GPU-Z to find the actual GPU load, the GPU memory load, and so on. Einstein may be maxing out one part of your graphics card and not the others.

Also, is your CPU limiting it? Einstein needs to use a CPU core in conjunction with the GPU. My RX 560 GPU, for example, along with my i5 8600K CPU, requires about three quarters of a CPU core to assist the GPU. If I max out the CPU with CPU projects, the GPU slows right down. I have 6 cores on the CPU, and I told BOINC to use only 4; that way there's 1 left for GPU assist and 1 for Windows background tasks. I make sure that in Windows Task Manager, the CPU never quite hits 100%.
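The "4 of 6 cores" limit described above can be set in the Manager's computing preferences, or as a percentage in global_prefs_override.xml in the BOINC data directory. A sketch, assuming the <max_ncpus_pct> element from the client documentation (4 of 6 cores is roughly 67%):

```xml
<!-- global_prefs_override.xml - place in the BOINC data directory. -->
<global_preferences>
    <max_ncpus_pct>67.0</max_ncpus_pct>
</global_preferences>
```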

It appears that, for Einstein at least (and most GPU projects I've crunched for on BOINC), this statement needs adjusting.
When I select 50% CPU utilization (1 of 2 cores), I can clearly see that the first core is utilized 100%, and the second core 7-12%.
The second core is now feeding both the Intel and Nvidia projects together.

The statement of 1 CPU core per GPU seems not to hold true anymore for Nvidia cards.
For Folding perhaps yes, but for BOINC, no.
ID: 93869
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1301
United Kingdom
Message 93871 - Posted: 23 Nov 2019, 8:09:37 UTC

BOINC does NO crunching - it is ONLY the applications sent to you by the projects that do any processing at all.
BOINC provides an environment for the project applications to run in, and it manages (loosely) the communication between you and the projects. Each project is responsible for its own applications, and each application has different requirements for CPU support of its GPU work - some need far more than others - so the general advice is to set aside one CPU core for every concurrent GPU application, "just in case".

Most of the tools we use to monitor CPU and GPU use take far too long over their measurements. There are times when a GPU application requires a very large amount of CPU support, but only for a very short period of time; averaged over the measuring period, this will appear to be, say, 10% CPU usage. But if you have that CPU core bound up doing something else, there is the overhead of unloading the running job, loading the pending job, then swapping back again - all of which takes time. It is a balancing act, so you have to determine which way is actually better for you, on your system, with your mix of projects. That will take time: days or even weeks of monitoring in one configuration, watching processing times, then doing it all again in the next configuration - again for days or weeks.
ID: 93871
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15560
Netherlands
Message 93875 - Posted: 23 Nov 2019, 10:10:23 UTC - in response to Message 93869.  

I've seen you make statements about the load on your GPU a couple of times now. You cannot compare the load of the CPU with that of the GPU, other than that both involve a science application that runs on the CPU. For calculations done on the GPU, data from a task needs to be translated into kernels that the GPU can run; that's done by the CPU. A lot of the data in the tasks is too difficult to translate into kernels and is therefore run on the CPU, not the GPU.
ID: 93875
bcavnaugh
Joined: 2 Jul 14
Posts: 17
United States
Message 93886 - Posted: 23 Nov 2019, 18:53:35 UTC - in response to Message 93826.  
Last modified: 23 Nov 2019, 19:03:36 UTC

I have the same question.
I'm running an RTX2060, with an Einstein@home CPU+GPU task.
The GPU part only taxes 80W, or 50% of my GPU.
I added my apps_config.xml file in the folder with this content:

<app_config>
   [<app>
      <name>Einstein@home</name>
      <max_concurrent>2</max_concurrent>
      <gpu_versions>
          <gpu_usage>0.5</gpu_usage>
      </gpu_versions>
    </app>]
</app_config>


however, now I'm seeing only 40Watts usage.

So I changed the <gpu_usage>0.5</gpu_usage> value to 1, but without success.

Any help is greatly appreciated.

Should be more like the below:

<app_config>
    <app>
        <name>hsgamma_FGRPB1G</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>0.2</cpu_usage>
        </gpu_versions>
    </app>
    <app>
        <name>hsgamma_BRP4G</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>0.2</cpu_usage>
        </gpu_versions>
    </app>
</app_config>


ID: 93886
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1301
United Kingdom
Message 93889 - Posted: 23 Nov 2019, 20:40:27 UTC

I have the same question.
I'm running an RTX2060, with an Einstein@home CPU+GPU task.
The GPU part only taxes 80W, or 50% of my GPU.


This only means that whatever part of the GPU is being used is drawing 80 W; it does not mean that the application is only using 50% of the GPU's capacity - it could be using 100% of a part of the GPU that only draws 50% of the power. Power draw is an extremely poor metric for "amount of GPU in use".
ID: 93889
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 93920 - Posted: 27 Nov 2019, 3:14:01 UTC

No, power usage is a great way to see how much of the GPU is being used.
GPU utilization is well below 100% (well below 80%, even).
ID: 93920
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 93929 - Posted: 27 Nov 2019, 23:05:42 UTC - in response to Message 93921.  
Last modified: 27 Nov 2019, 23:07:59 UTC

No, power usage is a great way to see how much of the GPU is being used.
GPU utilization is well below 100% (well below 80%, even).


As Rob Smith has already told you, that's incorrect. A GPU is made up of many parts; one could be maxed out while another is idle. If the processor is thinking hard but there's no memory access, the power won't be at 100%. Power will rarely hit 100% - that would require every single part of the card to be maxed out at once, which is very unlikely.


A GPU has cores and RAM.
GPU utilization depends on how many cores are processing, and that's what the wattage and GPU-utilization readouts mainly reflect.
Any additional parts of the GPU (Tensor/RT cores) make up only a tiny part of GPU utilization.
On Nvidia, Tensor cores number about 1/8th (12%) of the CUDA cores, and RT cores only 1.5%.
These cores operate at 16 or 8 bit (half or quarter precision), and therefore make up less than 1/2 to 1/4th of CUDA-core performance, so a GPU running at 90-95% can be considered fully utilized.

Running 2 projects on 1 GPU usually would not result in 'one piece of the GPU being under 100% load while the rest is idle'.
Running 2 projects on 1 GPU usually means more CUDA cores/shaders will be active, with the benefit that 2 projects can be worked on at a time, and the cost of a potentially small slowdown per task.

The only time a GPU can have lower output like that is when both projects depend on Tensor or RT cores.
However, with only 50% of the CUDA cores being used, the project should use the CUDA cores instead (even if they only need half or quarter precision from the Tensor/RT cores).

It would not be a very efficient form of coding if the entire task depended on the number of Tensor cores (seeing that they are 1/8th the number of CUDA cores, at least on Nvidia).
ID: 93929

Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.