Message boards : BOINC client : Multi core tasks alongside single core tasks.
Joined: 5 Oct 06 · Posts: 5128
I've re-opened Dave's issue, to keep it in the public eye while we continue investigating. I think the next move will have to be moving on to the latest available artifact, just to confirm whether it's still there. If it is - simulator.

Edit - before doing that: I checked yesterday that the single Amicable task was using three threads as directed - it was. CPU time was almost exactly three times elapsed time. Today, the second Amicable task to start has clocked up 60 minutes of CPU in 30 minutes elapsed - so it's starved of cycles.
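The three-times check above generalises: for a healthy n-thread task, CPU time divided by elapsed time should sit close to n. A minimal sketch of that ratio test (the function name and the worked numbers are my own illustration, not anything from the client):

```python
def thread_usage_ratio(cpu_time_s: float, elapsed_s: float) -> float:
    """CPU time / wall-clock time; close to n for a healthy n-thread task."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return cpu_time_s / elapsed_s

# Yesterday's healthy task: 3 threads requested, CPU time ~= 3 x elapsed.
print(thread_usage_ratio(cpu_time_s=3 * 1800, elapsed_s=1800))    # -> 3.0
# Today's starved task: 60 min of CPU in 30 min elapsed, 3 threads requested.
print(thread_usage_ratio(cpu_time_s=60 * 60, elapsed_s=30 * 60))  # -> 2.0
```

A ratio well below the requested thread count, as in the second case, is the "starved of cycles" signature.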
Joined: 28 Jun 10 · Posts: 2691
Now building from latest master.
Joined: 5 Oct 06 · Posts: 5128
Confirmed in the client emulator - Scenario 201, Simulation 1. Reported on GitHub.
Joined: 28 Jun 10 · Posts: 2691
CPDN task survived the restart. Still getting the same behaviour with the latest download of master.
Joined: 5 Oct 06 · Posts: 5128
I see you've reminded David of the original problem in the issue - fair comment. It might be useful if you indicated exactly how you're increasing the core count while running - which preference are you tweaking? That can't be done in the emulator - all files, including prefs and prefs_override, are defined at the start and can't be changed.
Joined: 28 Jun 10 · Posts: 2691
> It might be useful if you indicated exactly how you're increasing the core count while running - which preference are you tweaking? That can't be done in the emulator - all files, including prefs and prefs_override, are defined at the start and can't be changed.

Added to comment on issue. No project-specific preferences are set. If I change %CPUs before downloading tasks, the multi-thread tasks will use the number of available cores. But what I have been doing is increasing the number of cores from 40% to 50% via the manager, and BOINC doesn't recognise this - it behaves as if only six cores instead of eight are available. Shutting down the client and restarting does not change this behaviour.

Edit: It seems as if once a multi-thread task is running, it is impossible to change the number of cores available to BOINC.
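For reference, the manager's "Use at most N% of the CPUs" setting is stored as a <max_ncpus_pct> tag in global_prefs_override.xml, and `boinccmd --read_global_prefs_override` asks a running client to re-read that file. A hedged sketch of rewriting the tag directly (the regex-based helper and sample snippet are illustrative only; whether this route shows the same symptom as the manager is exactly what's under investigation here):

```python
import re

def set_max_ncpus_pct(prefs_xml: str, pct: float) -> str:
    """Rewrite the <max_ncpus_pct> value in a global_prefs_override.xml snippet."""
    return re.sub(
        r"<max_ncpus_pct>[^<]*</max_ncpus_pct>",
        f"<max_ncpus_pct>{pct:.6f}</max_ncpus_pct>",
        prefs_xml,
    )

override = (
    "<global_preferences>\n"
    "  <max_ncpus_pct>40.000000</max_ncpus_pct>\n"
    "</global_preferences>"
)
updated = set_max_ncpus_pct(override, 50.0)
# After writing the file back, run `boinccmd --read_global_prefs_override`
# to prompt a running client to pick up the change.
```

If the client still sits at the old core count after that, it would support the edit above: the limit is effectively frozen while a multi-thread task is running.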
Joined: 5 Oct 06 · Posts: 5128
> No project-specific preferences are set. If I change %CPUs before downloading tasks, the multi-thread tasks will use the number of available cores. But what I have been doing is increasing the number of cores from 40% to 50% via the manager, and BOINC doesn't recognise this - it behaves as if only six cores instead of eight are available. Shutting down the client and restarting does not change this behaviour.

My first Amicable task has finished, and a single-core task has (re-)started in its place - as it should. That doesn't contradict Dave's first sentence above, but I'll have to try again to reproduce. That'll involve changing the CPU count when there isn't a spare Amicable task to run - that's what went wrong last time.

Sun 28 May 2023 15:41:05 BST | Amicable Numbers | Computation for task amicable_10_21_2426_1685252702.324398_984_1 finished
Sun 28 May 2023 15:41:05 BST | NumberFields@home | [cpu_sched] Restarting task wu_sf3_DS-16x271-21_Grp883275of1000000_0 using GetDecics version 400 (default) in slot 3

With regard to the edit: yes, that's my observation too. It might be possible by changing client_state.xml, but I suspect you'd have to delve into the slot directory as well.

And I think I've found another one - whenever new work is fetched, the Event Log shows that the client re-reads the preferences override file, and prints all the settings. I think that's new - it's normally only read and displayed on request, or when a change is made.
Joined: 28 Jun 10 · Posts: 2691
> And I think I've found another one - whenever new work is fetched, the Event Log shows that the client re-reads the preferences override file, and prints all the settings. I think that's new - it's normally only read and displayed on request, or when a change is made.

For completeness, I should probably add that doing a <Read local preferences> from the Options menu made no difference.
Joined: 5 Oct 06 · Posts: 5128
See https://github.com/BOINC/boinc/issues/5255 - I've changed my analysis from that initial comment. It only happens when the client contacts Einstein, but it happens every time, not just when work is being requested.
Joined: 5 Oct 06 · Posts: 5128
It snagged another one while I was away from the machine. I tried to follow your problem exactly, but got

Sun 28 May 2023 19:20:22 BST | | Reading preferences override file
Sun 28 May 2023 19:20:22 BST | | Number of usable CPUs has changed from 5 to 6.
Sun 28 May 2023 19:20:22 BST | | [cpu_sched_debug] Request CPU reschedule: Number of usable CPUs has changed
Sun 28 May 2023 19:20:22 BST | Einstein@Home | [cpu_sched_debug] reserving 1.000000 of coproc NVIDIA
Sun 28 May 2023 19:20:22 BST | Einstein@Home | [cpu_sched_debug] add to run list: LATeah4021L08_1132.0_0_0.0_1708623_1 (NVIDIA GPU, FIFO) (prio -2.979101)
Sun 28 May 2023 19:20:22 BST | Einstein@Home | [cpu_sched_debug] reserving 1.000000 of coproc NVIDIA
Sun 28 May 2023 19:20:22 BST | Einstein@Home | [cpu_sched_debug] add to run list: LATeah4021L08_1124.0_0_0.0_15781647_0 (NVIDIA GPU, FIFO) (prio -3.041361)
Sun 28 May 2023 19:20:22 BST | Amicable Numbers | [cpu_sched_debug] add to run list: amicable_10_21_13911_1685282102.567599_62_0 (CPU, FIFO) (prio -0.001657)
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] add to run list: wu_sf3_DS-16x271-21_Grp898638of1000000_0 (CPU, FIFO) (prio -0.019242)
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] add to run list: wu_sf3_DS-16x271-21_Grp898642of1000000_0 (CPU, FIFO) (prio -0.019362)
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] add to run list: wu_sf3_DS-16x271-21_Grp898879of1000000_0 (CPU, FIFO) (prio -0.019482)
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] add to run list: wu_sf3_DS-16x271-21_Grp898643of1000000_0 (CPU, FIFO) (prio -0.019603)
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] add to run list: wu_sf3_DS-16x271-21_Grp898644of1000000_0 (CPU, FIFO) (prio -0.019723)
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] add to run list: wu_sf3_DS-16x271-21_Grp906352of1000000_0 (CPU, FIFO) (prio -0.019843)
Sun 28 May 2023 19:20:22 BST | | [cpu_sched_debug] final job list:
Sun 28 May 2023 19:20:22 BST | Einstein@Home | [cpu_sched_debug] 0: LATeah4021L08_1132.0_0_0.0_1708623_1 (MD: no; UTS: yes)
Sun 28 May 2023 19:20:22 BST | Einstein@Home | [cpu_sched_debug] 1: LATeah4021L08_1124.0_0_0.0_15781647_0 (MD: no; UTS: yes)
Sun 28 May 2023 19:20:22 BST | Amicable Numbers | [cpu_sched_debug] 2: amicable_10_21_13911_1685282102.567599_62_0 (MD: no; UTS: yes)
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] 3: wu_sf3_DS-16x271-21_Grp898638of1000000_0 (MD: no; UTS: no)
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] 4: wu_sf3_DS-16x271-21_Grp898642of1000000_0 (MD: no; UTS: no)
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] 5: wu_sf3_DS-16x271-21_Grp898879of1000000_0 (MD: no; UTS: no)
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] 6: wu_sf3_DS-16x271-21_Grp898643of1000000_0 (MD: no; UTS: no)
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] 7: wu_sf3_DS-16x271-21_Grp898644of1000000_0 (MD: no; UTS: no)
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] 8: wu_sf3_DS-16x271-21_Grp906352of1000000_0 (MD: no; UTS: no)
Sun 28 May 2023 19:20:22 BST | Einstein@Home | [cpu_sched_debug] scheduling LATeah4021L08_1132.0_0_0.0_1708623_1
Sun 28 May 2023 19:20:22 BST | Einstein@Home | [cpu_sched_debug] scheduling LATeah4021L08_1124.0_0_0.0_15781647_0
Sun 28 May 2023 19:20:22 BST | Amicable Numbers | [cpu_sched_debug] scheduling amicable_10_21_13911_1685282102.567599_62_0
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] scheduling wu_sf3_DS-16x271-21_Grp898638of1000000_0
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched_debug] all CPUs used (6.00 >= 6), skipping wu_sf3_DS-16x271-21_Grp898642of1000000_0
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched] Resuming wu_sf3_DS-16x271-21_Grp898638of1000000_0
Sun 28 May 2023 19:20:22 BST | NumberFields@home | [cpu_sched] Resuming task wu_sf3_DS-16x271-21_Grp898638of1000000_0 using GetDecics version 400 (default) in slot 3

That's as it should be, and doesn't repro yours. So what's the difference?
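The log above shows the shape of the scheduler's pass: walk the priority-ordered run list, commit each job's CPU usage, and skip the remainder once the usable-CPU budget is reached ("all CPUs used (6.00 >= 6)"). A much-simplified sketch of that loop (the job tuples and per-job CPU figures are illustrative assumptions; the real client also weighs GPUs, deadlines and much else):

```python
def build_schedule(run_list, usable_cpus):
    """Greedy pass over a priority-ordered run list, in the spirit of the
    [cpu_sched_debug] output: schedule jobs until the CPU budget is spent."""
    scheduled, cpus_used = [], 0.0
    for name, avg_ncpus in run_list:
        if cpus_used + avg_ncpus > usable_cpus:
            continue  # "all CPUs used (6.00 >= 6), skipping ..."
        scheduled.append(name)
        cpus_used += avg_ncpus
    return scheduled

# Two GPU tasks assumed to cost one CPU each, one 3-thread Amicable task,
# then single-core NumberFields tasks, with 6 usable CPUs:
run_list = [("einstein_gpu_1", 1.0), ("einstein_gpu_2", 1.0),
            ("amicable_mt", 3.0), ("nf_1", 1.0), ("nf_2", 1.0)]
print(build_schedule(run_list, 6))  # nf_2 is skipped: 6.0 CPUs already committed
```

The failure under discussion would correspond to `usable_cpus` staying at the old value after the preference change, so the loop cuts off one job too early.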
Joined: 28 Jun 10 · Posts: 2691
> That's as it should be, and doesn't repro yours. So what's the difference?

Don't know. I have only tested with AN for the multi-threaded tasks and CPDN for the single-core task. Maybe I should try another project, to see if it is something specific to Amicable Numbers - just as the re-reading of local preferences is specific to one project?
Joined: 28 Jun 10 · Posts: 2691
Do you know if I need the CUDA drivers for the Nvidia tasks from Einstein? With the driver I have installed, they just crash - as do the Amicable Numbers GPU tasks.
Joined: 5 Oct 06 · Posts: 5128
> Do you know if I need the CUDA drivers for the Nvidia tasks from Einstein? With the driver I have installed, they just crash - as do the Amicable Numbers GPU tasks.

No, I think Einstein is exclusively an OpenCL project. But you do need those drivers, included in the standard NV package. Mine has an 'NVIDIA driver metapackage', described in Synaptic as 'nvidia-driver-470' (older and newer numbers are available) - that one works and is good enough. NVidia enthusiasts discuss driver versions endlessly.
Joined: 28 Jun 10 · Posts: 2691
Downloaded OpenCL. I will see what happens next.
Joined: 5 Oct 06 · Posts: 5128
See edit below.
Joined: 28 Jun 10 · Posts: 2691
> Mine has an 'NVIDIA driver metapackage', described in Synaptic as 'nvidia-driver-470' (older and newer numbers are available) - that one works and is good enough. NVidia enthusiasts discuss driver versions endlessly.

470 is what I am using. I will have another look after treating my morning hypocaffeinia tomorrow.
Joined: 28 Jun 10 · Posts: 2691
Aaargh! I did a reboot and lost the CPDN task. The five-core-plus-GPU GW task fell over, but the one-core-plus-GPU Gamma-ray task seems to be crunching fine. Maybe some of the dodgy ones from the batch you posted about earlier in the year are still out there. Not too worried about the task, as the batch is from almost a year ago, so I'm not convinced it would have got looked at anyway. When the current batch of Einstein runs down I shall try again with Amicable, and see if the same happens with Einstein single-core tasks as did with the CPDN one. If it does, then the next stage is to find a non-Amicable multi-threaded task that will run on my box. I don't know which other projects have them without a GPU - I will have to do some searching.

Edit: Not the same error as the ones you reported.
Joined: 2 Feb 22 · Posts: 84
> ... next stage is to find a non-Amicable multi-threaded task that will run on my box. I don't know which other projects have them without a GPU?

ATLAS from LHC@home - but it is out of work for a couple of days. From the BOINC perspective, CMS from https://lhcathomedev.cern.ch/lhcathome-dev/apps.php is an mt VBox app. The scientific app inside the VM runs as single-core, but this shouldn't matter here; it may just result in a misleading walltime/CPU-time ratio.
Joined: 5 Oct 06 · Posts: 5128
LHC also has a dev site with a 'native-mt' (non-VBox) ATLAS app, though I think it still needs a bespoke comms layer adding in Linux. Their dev site doesn't routinely have work for that app, but the team were most helpful in generating a daily small batch of quick-running test data while I was testing changes to the MT server code prior to the release of v1.4.2. You also need an invitation code to join the dev project. I can give you contact details if you want to reach out to them.
Joined: 28 Jun 10 · Posts: 2691
Thanks Richard. Did you try to replicate what I had with an Amicable Numbers multi-threaded task, or just with Einstein?
Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.