Thread 'Milky Way fails on my GPU'

Message boards : GPUs : Milky Way fails on my GPU

mitrichr
Joined: 21 May 07
Posts: 349
United States
Message 46843 - Posted: 18 Dec 2012, 13:07:50 UTC

Running BOINC 7.0.28 on an i7-3930K with 16 GB of DRAM.
Twin GTX 670s, but with SLI enabled (both BOINC and my builder, Maingear, want Dev 0, so this is the best solution).

I have cc_config set to ignore dev 1, and I only run one GPU work unit at a time.
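In case it matters, the relevant bit of my cc_config.xml is along these lines (treat it as a sketch rather than my literal file; the exact tag can differ by client version, and I believe older clients used ignore_cuda_dev instead of ignore_nvidia_dev):

<cc_config>
   <options>
      <ignore_nvidia_dev>1</ignore_nvidia_dev>
   </options>
</cc_config>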

GPU work is succeeding for me on Einstein and SETI, but Milky Way gives me many, many immediate 1-second failures. I took this to their forum and lots of people tried to help, but to no avail.

I am succeeding with Milky Way big time on CPU work units, 4,000-6,000 credits a day.

But, you know, I would like to run GPU for this project.
http://sciencesprings.wordpress.com
http://facebook.com/sciencesprings

ID: 46843
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15563
Netherlands
Message 46846 - Posted: 18 Dec 2012, 14:18:29 UTC
Last modified: 18 Dec 2012, 14:19:21 UTC

Not many tasks with an error in them are left over, only one, so I'll lift the information from that one. It sounds to me like your GPU ran out of memory or other resources.

<core_client_version>7.0.28</core_client_version>
<![CDATA[
<message>
 - exit code -5 (0xfffffffb)
</message>
<stderr_txt>
BOINC: parse gpu_opencl_dev_index 0
<search_application> milkyway_separation 1.02 Windows x86_64 double OpenCL </search_application>
Unrecognized XML in project preferences: max_gfx_cpu_pct
Skipping: 20
Skipping: /max_gfx_cpu_pct
Unrecognized XML in project preferences: allow_non_preferred_apps
Skipping: 1
Skipping: /allow_non_preferred_apps
Unrecognized XML in project preferences: nbody_graphics_poll_period
Skipping: 30
Skipping: /nbody_graphics_poll_period
Unrecognized XML in project preferences: nbody_graphics_float_speed
Skipping: 5
Skipping: /nbody_graphics_float_speed
Unrecognized XML in project preferences: nbody_graphics_textured_point_size
Skipping: 250
Skipping: /nbody_graphics_textured_point_size
Unrecognized XML in project preferences: nbody_graphics_point_point_size
Skipping: 40
Skipping: /nbody_graphics_point_point_size
BOINC GPU type suggests using OpenCL vendor 'NVIDIA Corporation'
Error loading Lua script 'astronomy_parameters.txt': [string "number_parameters: 4..."]:1: '<name>' expected near '4' 
Error reading astronomy parameters from file 'astronomy_parameters.txt'
  Trying old parameters file
Using AVX path
Found 2 platforms
Platform 0 information:
  Name:       NVIDIA CUDA
  Version:    OpenCL 1.1 CUDA 4.2.1
  Vendor:     NVIDIA Corporation
  Extensions: cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_d3d9_sharing cl_nv_d3d10_sharing cl_khr_d3d10_sharing cl_nv_d3d11_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll 
  Profile:    FULL_PROFILE
Platform 1 information:
  Name:       NVIDIA CUDA
  Version:    OpenCL 1.1 CUDA 4.2.1
  Vendor:     NVIDIA Corporation
  Extensions: cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_d3d9_sharing cl_nv_d3d10_sharing cl_khr_d3d10_sharing cl_nv_d3d11_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll 
  Profile:    FULL_PROFILE
Using device 0 on platform 0
Found 2 CL devices
Device 'GeForce GTX 670' (NVIDIA Corporation:0x10de) (CL_DEVICE_TYPE_GPU)
Driver version:      306.97
Version:             OpenCL 1.1 CUDA
Compute capability:  3.0
Max compute units:   7
Clock frequency:     1045 Mhz
Global mem size:     2147483648
Local mem size:      49152
Max const buf size:  65536
Double extension:    cl_khr_fp64
Error creating context (-5): CL_OUT_OF_RESOURCES
Error getting device and context (-5): CL_OUT_OF_RESOURCES
Failed to calculate likelihood
<background_integral> 1.#QNAN0000000000 </background_integral>
<stream_integral>  1.#QNAN0000000000  1.#QNAN0000000000  1.#QNAN0000000000 </stream_integral>
<background_likelihood> 1.#QNAN0000000000 </background_likelihood>
<stream_only_likelihood>  1.#QNAN0000000000  1.#QNAN0000000000  1.#QNAN0000000000 </stream_only_likelihood>
<search_likelihood> 1.#QNAN0000000000 </search_likelihood>
10:16:09 (1936): called boinc_finish

</stderr_txt>
]]>

Error -5 is a science application error.
CL_OUT_OF_RESOURCES happens when the GPU has literally run out of resources it can use. Running out of memory is only one of its meanings; it can, for example, also run out of processors.

From Khronos:
CL_OUT_OF_RESOURCES if there is a failure to queue the execution instance of kernel on the command-queue because of insufficient resources needed to execute the kernel. For example, the explicitly specified local_work_size causes a failure to execute the kernel because of insufficient resources such as registers or local memory. Another example would be the number of read-only image args used in kernel exceed the CL_DEVICE_MAX_READ_IMAGE_ARGS value for device or the number of write-only image args used in kernel exceed the CL_DEVICE_MAX_WRITE_IMAGE_ARGS value for device or the number of samplers used in kernel exceed CL_DEVICE_MAX_SAMPLERS for device.
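To show where it trips: the failing call in your log is the context creation itself. Stripped down to a bare sketch in plain OpenCL C (this is not Milkyway's actual code), that call path looks roughly like this:

/* Bare sketch (not Milkyway's actual code): create an OpenCL context on the
   first GPU of the first platform and report the error the same way the task
   log does. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL GPU found\n");
        return 1;
    }

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    if (ctx == NULL) {
        /* -5 here would be CL_OUT_OF_RESOURCES, the same error the task log
           shows at context creation: the driver could not set aside what it
           needs on the device. */
        fprintf(stderr, "Error creating context (%d)\n", err);
        return 1;
    }

    clReleaseContext(ctx);
    printf("Context created fine on this device.\n");
    return 0;
}

Running something like that outside of BOINC would at least show whether the driver can create a context on the card at all.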


So you'll have to go back to the Milkyway forums and ask them about that.
ID: 46846
mitrichr
Joined: 21 May 07
Posts: 349
United States
Message 46859 - Posted: 18 Dec 2012, 18:18:12 UTC

Hey Jord-

Thanks. How are you? It's very strange. SETI and Einstein GPU work units do quite well. I also wanted to run Milky Way on GPU; my blog has taken a definite tilt toward astronomy.

I was originally running GPUGrid, very successfully, but I found that I needed to drop that project.

GPU processing is very strange. I think that the projects do not play well with each other.

The resources should not be a problem. The i7-3930K is a hyper-threaded six-core, and I am only crunching CPU work units on nine of its twelve threads. The machine has 16 GB of memory and does nothing but crunch, so memory should not be an issue.

At another project I had been warned away from running Milky Way on GPU with GTX 670s; the project said they should be fine.

For now, I am just going to be satisfied running this project on CPU across four machines, doing about 4,000-6,000 credits per day at 213.76 credits per WU, regardless of which machine.

Have a great Christmas holiday.
http://sciencesprings.wordpress.com
http://facebook.com/sciencesprings

ID: 46859
Jord
Volunteer tester
Help desk expert
Joined: 29 Aug 05
Posts: 15563
Netherlands
Message 46860 - Posted: 18 Dec 2012, 18:26:57 UTC - in response to Message 46859.  

The resources should not be a problem.

Wrong resources. It's not resources as in 'everything capable of doing calculations in your computer' or 'memory and disk values'. It's the resources that OpenCL uses to set up kernels, store them, and do calculations with them.

So it is definitely something in the science application, and thus something that Milkyway will want to know about. You may want to copy and paste the Khronos text I quoted into your thread at Milkyway; they may want to see it.

Khronos are the developers of OpenCL; that is their explanation of what CL_OUT_OF_RESOURCES stands for.

So it has nothing to do with how well Einstein and Seti run on your GPU. Neither Seti's nor Einstein's applications were compiled by the people from Milkyway; only their own application was, and it is that application that gives the error. So please stop searching outside the box. It's not there to be found. :-)
ID: 46860
mitrichr
Joined: 21 May 07
Posts: 349
United States
Message 46861 - Posted: 18 Dec 2012, 19:32:01 UTC

ID: 46861
mitrichr
Joined: 21 May 07
Posts: 349
United States
Message 46903 - Posted: 23 Dec 2012, 14:19:56 UTC
Last modified: 23 Dec 2012, 14:20:07 UTC

Jord-

I did some more searching about Milky Way and GTX 670s, and saw comments that the NVIDIA cards lack good double precision, which Milky Way requires but SETI, Einstein and GPUGrid do not.
http://sciencesprings.wordpress.com
http://facebook.com/sciencesprings

ID: 46903
nanoprobe
Joined: 9 Apr 12
Posts: 51
United States
Message 47026 - Posted: 3 Jan 2013, 4:10:13 UTC - in response to Message 46903.  

Jord-

I did some more searching about Milky Way and GTX 670s, and saw comments that the NVIDIA cards lack good double precision, which Milky Way requires but SETI, Einstein and GPUGrid do not.

That is correct. NVIDIA has so crippled DP capability on even their high-end consumer cards as to make them useless for MW@Home or any other future project that may require DP. They've done that to protect the integrity (aka the bottom line) of their Tesla cards.
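For anyone curious, whether a card reports double precision at all is a simple device query; the "Double extension: cl_khr_fp64" line in the task log earlier in this thread comes from exactly that kind of check. A rough sketch (not Milkyway's actual code):

/* Rough sketch (not Milkyway's actual code): ask the first GPU whether it
   advertises the cl_khr_fp64 double-precision extension. Consumer GeForce
   600-series cards do report it, but run DP at a small fraction of their
   single-precision speed. */
#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char extensions[4096] = {0};

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL GPU found\n");
        return 1;
    }

    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, sizeof(extensions) - 1,
                    extensions, NULL);

    if (strstr(extensions, "cl_khr_fp64"))
        printf("cl_khr_fp64 reported: double precision is available.\n");
    else
        printf("No double-precision extension reported.\n");

    return 0;
}

So the GTX 670 can do DP, as its log shows; it is just slow at it compared with the rest of the chip.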
ID: 47026
kdsjsdj
Joined: 4 Jan 13
Posts: 5
Message 47045 - Posted: 4 Jan 2013, 12:27:02 UTC - in response to Message 47026.  

I bought a GTX 570 a year ago and have been happy with it, but I wouldn't crunch Milky Way if it were the last BOINC project standing anyway.

What NVIDIA has done to DP turns me right off their GTX 6xx lineup. I definitely will not be buying one, and for sure nothing from their Tesla lineup either, as it has just too much integrity for my bottom line. I'll see who develops for AMD: they're priced right, and the reports I hear indicate their latest offerings scream. I'd buy one for the right project.
ID: 47045
mitrichr
Joined: 21 May 07
Posts: 349
United States
Message 47046 - Posted: 4 Jan 2013, 16:06:39 UTC - in response to Message 47045.  

I bought a GTX 570 a year ago and have been happy with it, but I wouldn't crunch Milky Way if it were the last BOINC project standing anyway.


Interesting, what do you dislike about Milky Way?

Thanks.

http://sciencesprings.wordpress.com
http://facebook.com/sciencesprings

ID: 47046
kdsjsdj
Joined: 4 Jan 13
Posts: 5
Message 47048 - Posted: 4 Jan 2013, 23:17:01 UTC - in response to Message 47046.  

It's a rogue project that thinks it's the only project worthy of your resources. They pay hugely inflated credits to steal crunchers from other projects. That is lower than a snake's belly in a wagon rut, despicable, unforgivable. If their server blew up tomorrow I would say "Good riddance, hope your next server blows up too!"

I heard they changed their behavior, but they haven't changed the management or the moderators who perpetrated the scam. The same pr***s are still running the show, so they can go pound sand IMHO. I wouldn't crunch their tasks even if they paid me real cash to do it.
ID: 47048

