Message boards : API : Multi-threaded app as single-threaded?
Author | Message |
---|---|
Send message Joined: 29 May 07 Posts: 11 |
Hi, I'm converting an existing multi-threaded app for BOINC; it also happens to be my first BOINC app. I found boinc_init_parallel(), and so far I have it set up that way. But the project I'm converting it for may not want the app multi-threaded. I also gather that a multi-threaded app tends not to allow other projects, or other sub-projects of the same project, to process WUs at the same time. As far as memory and other resources go, there would be no problem with running multiple copies of the app, each with a single thread. However, the app currently does all actual work in worker threads separate from the main thread. My question is: how much would I have to do to allow the app to run with a simple boinc_init()? Is limiting the number of worker threads to one enough, or would I need to rip out all the pthreads code? Thanks! |
Send message Joined: 17 Aug 09 Posts: 19 |
Bump. I need to figure out how to force the client to pick up only 1 WU per machine, not per CPU core. Maybe application planning will be helpful here? |
Send message Joined: 29 Aug 05 Posts: 15561 |
Use the Project Options, specifically these: <max_wus_in_progress>N</max_wus_in_progress> and <max_wus_in_progress_gpu>M</max_wus_in_progress_gpu>. (You have read through Application Multi-Threading, I hope?) |
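For reference, these options live in the project's config.xml inside the <config> element. A minimal sketch (surrounding elements abbreviated, values illustrative; in some server versions the CPU limit is interpreted per CPU core rather than per host):

```xml
<boinc>
  <config>
    <!-- ... other project options ... -->
    <max_wus_in_progress>1</max_wus_in_progress>          <!-- CPU jobs -->
    <max_wus_in_progress_gpu>1</max_wus_in_progress_gpu>  <!-- GPU jobs -->
  </config>
  <!-- <daemons> ... </daemons> -->
</boinc>
```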
Send message Joined: 17 Aug 09 Posts: 19 |
I have indeed read through that section. The problem here is that setting <max_wus_in_progress> in config.xml is project-wide. I need to be able to define this on a per-application basis. |
Send message Joined: 29 Aug 05 Posts: 15561 |
I've forwarded your question to the developers, as I don't know if the present back-end can do that. |
Send message Joined: 10 Sep 05 Posts: 726 |
The latest BOINC server code provides for per-application limits on in-progress jobs, in-progress GPU jobs, and jobs returned per RPC. However, this hasn't been tested and the implementation may not be finished. If anyone really needs it, convince me to finish it. -- David |
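The per-application limits described here appeared in later server releases as a <max_jobs_in_progress> block in config.xml. A sketch, with the caveat that the element names below are from my reading of the project-options documentation and may differ in your server version, and the application name is hypothetical:

```xml
<max_jobs_in_progress>
  <project>
    <total_limit>
      <jobs>4</jobs>              <!-- project-wide cap per host -->
    </total_limit>
  </project>
  <app>
    <app_name>vm_app</app_name>   <!-- hypothetical application name -->
    <total_limit>
      <jobs>1</jobs>              <!-- per-application cap per host -->
    </total_limit>
  </app>
</max_jobs_in_progress>
```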
Send message Joined: 17 Aug 09 Posts: 19 |
OK David, here goes!

We have a 1,600-node Windows-based desktop grid (DG) here at the University of Westminster (UoW), which runs applications for EDGeS@Home. Some of the applications (not all of them, hence the per-application point in my earlier post) are Linux-specific, so we're looking at a solution using virtual machines. Much literature exists on the suitability of different VM technologies for solving this problem. I have experimented with VMware, VirtualBox and QEMU/KQEMU, and have concluded that VirtualBox is the preferable solution for our DG, and probably for further BOINC integration, for several reasons. Firstly, VMware cannot be run headless; we have a configuration here where WUs are suspended when a student interacts with a DG node, and students should not see the virtual machine (VM) display of any kind. Secondly, the KQEMU driver for Windows is now obsolete, so QEMU is not suitable if you want any kind of performance on an x86 architecture. Which leaves VirtualBox...

VirtualBox has some nice features, namely a VM control interface (VBoxManage.exe, a wrapper around a COM interface) and the capacity to share directories between the guest and host machines. I have written a C-based BOINC application which uses the Win32 CreateProcess API (like the wrapper does) to launch, control (suspend/resume) and poll the VM, and to transfer the job's outputs (through the aforementioned shared directory) when the WU is finished.

Now comes the reason for limits on simultaneous WUs per application. When we have multiple VMs running simultaneously, i.e. one per CPU core, things get tricky because of the way VirtualBox stores VM configuration in XML files. It keeps one XML file shared amongst all VMs, so running simultaneous WUs would require the WUs to have transactional access to this file. If I can restrict the client to pulling one WU at a time, this tricky constraint no longer applies. The VM can instead be instantiated with as many virtual CPUs as there are physical cores, so the node is still fully utilized.

Thoughts!? Chris
Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.