Caveat: Change in WU SIZE = Change in WU CACHE

Message boards : BOINC Manager : Caveat: Change in WU SIZE = Change in WU CACHE
itenginerd
Joined: 6 Sep 05
Posts: 27
United States
Message 445 - Posted: 19 Sep 2005, 15:15:59 UTC

Interesting issue. LHC has started tossing out a few monster units that take ~2x normal time to crunch.

Running 4.72, so it uses the time estimate correction code. Net result is that crunching one of these large WUs causes all WUs from that project to have nearly doubled time estimates, which in turn affects how many WUs the client pulls down for the cache.
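
To make the effect concrete, here is a toy Python sketch. This is not the actual BOINC client code: the update rule, function names, and numbers are all illustrative. It shows how one oversized result can inflate a per-project correction factor and halve the number of WUs fetched for the same cache setting:

```python
# Toy model of a per-project duration correction factor (DCF).
# The real BOINC client uses a more careful update rule; this naive
# version simply scales the factor by the last observed ratio.

def updated_dcf(dcf, estimated_hours, actual_hours):
    """Scale the factor by the observed actual/estimated ratio."""
    return dcf * (actual_hours / estimated_hours)

def wus_fetched(cache_hours, estimated_hours, dcf):
    """How many WUs fit in the requested cache, using corrected estimates."""
    return int(cache_hours // (estimated_hours * dcf))

dcf = 1.0
estimate = 5.0   # hours the project claims a normal WU takes
cache = 40.0     # hours of work the user wants buffered

print(wus_fetched(cache, estimate, dcf))   # 8 WUs before

# One "monster" WU comes back at ~2x the normal run time:
dcf = updated_dcf(dcf, estimate, 10.0)     # factor jumps to 2.0

print(wus_fetched(cache, estimate, dcf))   # only 4 WUs now
```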

Not really a problem, per se, but definitely something to be aware of if BOINC projects start putting out WUs of different sizes. You thought people complained about how big their cache was before you added the time correction factor. 8)

(j)
James
ID: 445
eberndl

Joined: 19 Sep 05
Posts: 1
Message 448 - Posted: 19 Sep 2005, 17:54:12 UTC - in response to Message 445.  

> Interesting issue. LHC has started tossing out a few monster units that take ~2x normal time to crunch.
>
> Running 4.72, so it uses the time estimate correction code...

Would this also be an issue for protein predictor? It has 2 (or is it 3?) different applications that it can use to process different units (Mfold, charmm, etc.). Is the estimate by project or by program?
ID: 448
itenginerd
Joined: 6 Sep 05
Posts: 27
United States
Message 450 - Posted: 19 Sep 2005, 18:18:28 UTC - in response to Message 448.  

> Would this also be an issue for protein predictor? It has 2 (or is it 3?) different applications that it can use to process different units (Mfold, charmm, etc.). Is the estimate by project or by program?

IMO, it'll be an issue for all projects (that's why I put it here, for anyone who's wondering).

Now that you ask, though, I looked across accounts on different projects. I thought the info was kept once across all BOINC projects, but digging a little deeper, it looks like the data is kept on a per-host basis. Since host information is kept per project, the time correction factor will effectively be per project as well.

All that said, the same problem may exist. If an Mfold unit in 4.45 estimates that it will take you 10 hours, and it takes 1, then you'll get a corresponding "estimate adjustment %". If a Charmm unit takes you 25% of BOINC-estimated time, then you may run into the same kind of problem.
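
The concern can be sketched numerically. Assuming (hypothetically, using the ratios above) that Mfold results come in at 10% of their estimate and Charmm results at 25%, and a smoothed per-project factor that drifts toward whichever result finished last, the single shared factor ends up between the two ratios and is wrong for both applications. This is a toy sketch, not the client's actual update rule:

```python
# Hypothetical actual/estimated run-time ratios for two apps that
# share one per-project correction factor.
true_ratio = {"Mfold": 0.10, "charmm": 0.25}

dcf = 1.0
for app in ["Mfold", "charmm"] * 20:
    # Smoothed update: drift 10% of the way toward the observed ratio.
    dcf = 0.9 * dcf + 0.1 * true_ratio[app]

# The factor settles between 0.10 and 0.25 -- too high for Mfold
# estimates, too low for charmm estimates.
print(round(dcf, 3))
```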

LHC only uses one executable, and I'm not in PPAH, so I can't test. Wouldn't be a bad idea to give it a try and see what happens. Like I said before, it really only amounts to a nuisance that alters the size of one's cache. It doesn't affect how much work you actually do.

(j)
James
ID: 450
Bill Michael

Joined: 30 Aug 05
Posts: 297
Message 453 - Posted: 19 Sep 2005, 19:33:05 UTC - in response to Message 448.  

> Would this also be an issue for protein predictor? It has 2 (or is it 3?) different applications that it can use to process different units (Mfold, charmm, etc.). Is the estimate by project or by program?

Predictor doesn't support the correction factor yet. As far as I know, the factor is per-project, not per-application-within-a-project. Remember, though, that it's a correction factor applied to their estimate: if they estimate an Mfold WU at 2 hours, for example, and a charmm WU at 3 hours, and your correction factor is .5, you'll see 1 hour on the Mfold and 1.5 hours on the charmm.
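
That arithmetic can be checked in a couple of lines of Python. The numbers are the ones from the post above; the variable names are just for illustration:

```python
# One per-project correction factor scales every application's estimate
# by the same amount; it cannot favor one app over another.
project_dcf = 0.5
estimates = {"Mfold": 2.0, "charmm": 3.0}   # project-side estimates, in hours

shown = {app: hours * project_dcf for app, hours in estimates.items()}
print(shown)   # {'Mfold': 1.0, 'charmm': 1.5}
```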

If LHC is sending out these WUs with the SAME estimate as their "normal" WUs, then they're the ones causing the problem - if these monsters take twice as long, it's up to them to make the estimate on them twice as long.

ID: 453
itenginerd
Joined: 6 Sep 05
Posts: 27
United States
Message 455 - Posted: 19 Sep 2005, 21:08:51 UTC - in response to Message 453.  
Last modified: 19 Sep 2005, 21:09:17 UTC

> remember, it's a correction factor to their estimate

1) Interesting. Didn't realize that the projects were the ones generating the base estimates for their WUs. You learn something new every day. 8)

2) LHC do something on or ahead of the curve? Right. I'll look for that just after -insert name of hot chick- starts pulling my shots at the local caffeine hut.

3) Still, this is an interesting issue. I wonder if any of the mainstream users think about things like that when they're clamoring for features like this. Oh. Yeah. See that last sentence under #2....

(j)
James
ID: 455
Paul D. Buck

Joined: 29 Aug 05
Posts: 225
Message 461 - Posted: 20 Sep 2005, 4:02:33 UTC

LHC@Home is a little different in that their work unit run times are spread by a factor of 100 with the same application. So, that is going to be a tad of a problem ...
ID: 461


Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.