Some more advanced GPU stuff (Nvidia power capping) on Linux
Joined: 8 Nov 19 · Posts: 718
So, I finally managed to run BOINC on my Nvidia RTX 2060 and Intel iGPU, as well as on the CPU. I'm ecstatic! See this thread. I have also forced up to 3 tasks per GPU from Einstein, by creating the following app_config.xml file inside the project directory:

```xml
<app_config>
    <app>
        <name>Einstein@Home</name>
        <max_concurrent>3</max_concurrent>
        <gpu_versions>
            <gpu_usage>.25</gpu_usage>
        </gpu_versions>
    </app>
    <app_version>
        <ngpus>2</ngpus>
    </app_version>
    <project_max_concurrent>5</project_max_concurrent>
</app_config>
```

Seems like the above was just a phantom (probably a better task was loaded right after I edited the app_config.xml file); there's still only 1 GPU task active per GPU in BOINC. Are these config files (both cc_config.xml and app_config.xml) really working as the manual describes?

My next issue is power capping. For Folding, I can load 1 task (a work unit, as they call it) onto the GPU, and change the stock power limit (170 W) to a capped (125 W) and overclocked setting. This gives me 98% of the PPD (points) at 73% of the GPU's power consumption. My problem with BOINC is that as soon as I even touch the power cap, the GPU utilization drops to a near stall (40-60 W on an RTX). The reason for capping the power is so that I can achieve the same lower power consumption, at nearly the same performance, as with Folding.

I currently have the choice to either run 1 task on a GPU at 40% utilization and 1350 MHz, or run multiple tasks at 60-90% load and 1935-2010 MHz (once the load exceeds 50%, Nvidia GPUs run at their full boost clocks). The latter greatly accelerates the work, but at a higher power consumption.
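For reference, here is how I set the cap on Linux; a minimal sketch with nvidia-smi, assuming GPU index 0 and my 125 W target (the values would need adjusting for other cards):

```bash
# Enable persistence mode so the limit holds until reboot (requires root).
sudo nvidia-smi -pm 1

# Show the supported power-limit range for GPU 0 before picking a value.
sudo nvidia-smi -i 0 -q -d POWER

# Cap GPU 0 at 125 W (stock on this RTX 2060 is 170 W).
sudo nvidia-smi -i 0 -pl 125
```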
Joined: 25 May 09 · Posts: 1301
BOINC does not dial back the performance of your machine; it is either the operating system or the BIOS that does. A lot of people say that running the iGPU is a waste of power, as it shares so many resources with the CPU that it drags CPU performance down and just heats everything up.

From what you've described, your RTX is working hard to protect itself from some internal power limit that is not revealed. Setting a low power limit will cause all sorts of "strange" things to happen.

Do not attempt to compare performance between projects unless you know that the two projects use EXACTLY the same mix of operations, and EXACTLY the same processor, memory and communications processes. Very small changes in any of these can have a very large impact on the performance of a processor, and as each project has different aims and objectives, the mix for each project will be very different.
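If you want to see which limiter the card is actually hitting, the driver will tell you; a quick sketch, assuming a single Nvidia GPU at index 0:

```bash
# Show the active clocks throttle reasons (SW power cap, HW slowdown,
# thermal slowdown, etc.) along with the current performance state.
nvidia-smi -i 0 -q -d PERFORMANCE

# Watch power draw, temperature, clocks and utilization while a task runs.
nvidia-smi dmon -s puc
```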
Joined: 29 Aug 05 · Posts: 15560
Please read the documentation on how to use the app_config.xml file: https://boinc.berkeley.edu/wiki/Client_configuration#Project-level_configuration. The <name> and <app_name> values are NOT Einstein@Home, but the application name found in the client_state.xml file for those tasks.

```xml
<app_config>
    <app>
        <name>Einstein@Home</name>
        <max_concurrent>3</max_concurrent>
        <gpu_versions>
            <gpu_usage>.25</gpu_usage>
        </gpu_versions>
    </app>
    <app_version>
        <ngpus>2</ngpus>
    </app_version>
    <project_max_concurrent>5</project_max_concurrent>
</app_config>
```
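For illustration, something along these lines is what the documentation intends; hsgamma_FGRPB1G is only an example application name here — take the real one from the <name> elements in your own client_state.xml:

```xml
<!-- Sketch only: replace hsgamma_FGRPB1G with the application <name>
     exactly as it appears in your client_state.xml. -->
<app_config>
    <app>
        <name>hsgamma_FGRPB1G</name>
        <max_concurrent>3</max_concurrent>
        <gpu_versions>
            <!-- 0.25 of a GPU per task = up to 4 tasks per GPU. -->
            <gpu_usage>0.25</gpu_usage>
            <!-- CPU reserved per GPU task; both elements are required. -->
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
    <project_max_concurrent>5</project_max_concurrent>
</app_config>
```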
Joined: 8 Nov 19 · Posts: 718
> Please read the documentation on how to use the app_config.xml file: https://boinc.berkeley.edu/wiki/Client_configuration#Project-level_configuration. The <name> and <app_name> values are NOT Einstein@Home, but the application name found in the client_state.xml file for those tasks. [...]

Thanks! Would it still work if I removed the <name>***</name> tag?
Joined: 8 Nov 19 · Posts: 718
> BOINC does not dial back the performance of your machine; it is either the operating system or the BIOS that does.

I'm running a multi-GPU desktop, not a laptop. My BIOS certainly isn't cutting down on GPU utilization. In FAH I can set the GPU to 125 W, and it usually stays within 2 W of that limit. And yes, for the nitpickers: these are projects running on BOINC; I categorize them all under 'BOINC'. BOINC GPU tasks seem to be assigned smaller work that does not fully utilize the GPU.

Also, the Celeron G4900 is rated at around 200 GFLOPS of FP32. Its Intel UHD 610 is rated exactly the same: 200 GFLOPS. The Celeron is pushing 2x Nvidia GPUs right now, with a third thread pushing the Intel iGPU. Just pushing the 2x Nvidia GPUs would leave only about half a core available for CPU crunching (50 GFLOPS). Now the 12 execution units of the Intel UHD 610, running at 1050 MHz, are processing the work the CPU could be doing, while the CPU is pushing all 3 GPUs in Linux. I think I have pretty much maxed out that GPU. I'll be replacing it with an Intel Core i5 7400 I've got lying around. That one has 24 execution units instead of 12, so doing the same should almost double the output, plus leave 1 CPU core free for CPU crunching.

Also, it's not the projects that fully use a CPU core per Nvidia GPU; it's the Nvidia drivers that do this. The projects just don't support this feature, so CPU cores are often not fully utilized. In some cases, 2 GPUs could be crunching while using only 10% of a CPU core per GPU.
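On that last point, the CPU reservation per GPU task can at least be declared to the BOINC scheduler so it stops setting aside whole cores; a sketch, again with a placeholder app name, assuming a GPU task that really only needs about a tenth of a core:

```xml
<!-- Sketch only: hsgamma_FGRPB1G is a placeholder; use the application
     <name> from your own client_state.xml. -->
<app_config>
    <app>
        <name>hsgamma_FGRPB1G</name>
        <gpu_versions>
            <!-- Two tasks share one GPU... -->
            <gpu_usage>0.5</gpu_usage>
            <!-- ...and each reserves only 10% of a CPU core, freeing
                 the rest of the core for CPU crunching. -->
            <cpu_usage>0.1</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
```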