
Add Kernel support for returning all available OpenCL hardware information #38

Open
GoogleCodeExporter opened this issue Jul 7, 2015 · 2 comments

@GoogleCodeExporter

OpenCL allows the developer to query the underlying hardware for available 
information which can then be used at runtime to determine appropriate kernel 
parameters. We are specifically interested in this information in order to 
properly partition our data based on the available GPU memory constraints on 
the deployed hardware platform.

Ideally, this would be returned in a Map<String,String> or Map<Enum,String>.

For example:

CL_DEVICE_ADDRESS_BITS
CL_DEVICE_AVAILABLE
CL_DEVICE_COMPILER_AVAILABLE
CL_DEVICE_ENDIAN_LITTLE
CL_DEVICE_ERROR_CORRECTION_SUPPORT
CL_DEVICE_EXECUTION_CAPABILITIES
CL_DEVICE_EXTENSIONS
CL_DEVICE_GLOBAL_MEM_CACHELINE_SIZE
CL_DEVICE_GLOBAL_MEM_CACHE_SIZE
CL_DEVICE_GLOBAL_MEM_CACHE_TYPE
CL_DEVICE_GLOBAL_MEM_SIZE
CL_DEVICE_HOST_UNIFIED_MEMORY
CL_DEVICE_IMAGE2D_MAX_HEIGHT
CL_DEVICE_IMAGE2D_MAX_WIDTH
CL_DEVICE_IMAGE3D_MAX_DEPTH
CL_DEVICE_IMAGE3D_MAX_HEIGHT
CL_DEVICE_IMAGE3D_MAX_WIDTH
CL_DEVICE_IMAGE_SUPPORT
CL_DEVICE_LOCAL_MEM_SIZE
CL_DEVICE_LOCAL_MEM_TYPE
CL_DEVICE_MAX_CLOCK_FREQUENCY
CL_DEVICE_MAX_COMPUTE_UNITS
CL_DEVICE_MAX_CONSTANT_ARGS
CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE
CL_DEVICE_MAX_MEM_ALLOC_SIZE
CL_DEVICE_MAX_PARAMETER_SIZE
CL_DEVICE_MAX_READ_IMAGE_ARGS
CL_DEVICE_MAX_SAMPLERS
CL_DEVICE_MAX_WORK_GROUP_SIZE
CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS
CL_DEVICE_MAX_WORK_ITEM_SIZES
CL_DEVICE_MAX_WRITE_IMAGE_ARGS
CL_DEVICE_MEM_BASE_ADDR_ALIGN
CL_DEVICE_MIN_DATA_TYPE_ALIGN_SIZE
CL_DEVICE_NAME
CL_DEVICE_NATIVE_VECTOR_WIDTH_CHAR
CL_DEVICE_NATIVE_VECTOR_WIDTH_DOUBLE
CL_DEVICE_NATIVE_VECTOR_WIDTH_FLOAT
CL_DEVICE_NATIVE_VECTOR_WIDTH_INT
CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG
CL_DEVICE_NATIVE_VECTOR_WIDTH_SHORT
CL_DEVICE_OPENCL_C_VERSION
CL_DEVICE_PREFERRED_VECTOR_WIDTH_CHAR
CL_DEVICE_PREFERRED_VECTOR_WIDTH_DOUBLE
CL_DEVICE_PREFERRED_VECTOR_WIDTH_FLOAT
CL_DEVICE_PREFERRED_VECTOR_WIDTH_INT
CL_DEVICE_PREFERRED_VECTOR_WIDTH_LONG
CL_DEVICE_PREFERRED_VECTOR_WIDTH_SHORT
CL_DEVICE_PROFILE
CL_DEVICE_PROFILING_TIMER_RESOLUTION
CL_DEVICE_QUEUE_PROPERTIES
CL_DEVICE_SINGLE_FP_CONFIG
CL_DEVICE_TYPE
CL_DEVICE_VENDOR
CL_DEVICE_VENDOR_ID
CL_DEVICE_VERSION
CL_DRIVER_VERSION
CL_PLATFORM_EXTENSIONS
CL_PLATFORM_NAME
CL_PLATFORM_PROFILE
CL_PLATFORM_VENDOR
CL_PLATFORM_VERSION
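A minimal sketch of the requested return shape. Nothing here is existing Aparapi API: the enum keys mirror a few of the CL_* names above, the values are placeholders, and a real implementation would obtain them via clGetDeviceInfo through JNI.

```java
import java.util.EnumMap;
import java.util.Map;

public class DeviceInfoSketch {

    // A few of the CL_* query keys above, modeled as an enum key type
    // (the Map<Enum,String> variant of the request).
    enum ClDeviceInfo {
        CL_DEVICE_NAME,
        CL_DEVICE_GLOBAL_MEM_SIZE,
        CL_DEVICE_MAX_WORK_GROUP_SIZE
    }

    // Hypothetical query method: a real implementation would call
    // clGetDeviceInfo via JNI; placeholder values show the Map shape only.
    static Map<ClDeviceInfo, String> getDeviceInfo() {
        Map<ClDeviceInfo, String> info = new EnumMap<>(ClDeviceInfo.class);
        info.put(ClDeviceInfo.CL_DEVICE_NAME, "ExampleGPU");
        info.put(ClDeviceInfo.CL_DEVICE_GLOBAL_MEM_SIZE, "1073741824"); // 1 GiB
        info.put(ClDeviceInfo.CL_DEVICE_MAX_WORK_GROUP_SIZE, "256");
        return info;
    }

    // Example use: partition a data set so each chunk fits in global memory,
    // as described in the issue text (ceiling division).
    static int partitionCount(long dataSizeBytes, Map<ClDeviceInfo, String> info) {
        long globalMem = Long.parseLong(info.get(ClDeviceInfo.CL_DEVICE_GLOBAL_MEM_SIZE));
        return (int) Math.max(1, (dataSizeBytes + globalMem - 1) / globalMem);
    }
}
```

With the placeholder 1 GiB of global memory, a 4 GiB data set would be split into 4 partitions.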

Original issue reported on code.google.com by [email protected] on 14 Feb 2012 at 8:50

@GoogleCodeExporter
Author

At present we can only interact with OpenCL devices during a Kernel.execute() 
dispatch call. So being able to query outside of a dispatch will require us 
to expose devices somehow.

I could imagine a helper (an OpenCLHelper class) which we could use to query
devices, capabilities and parameters. This could even return a notion of
a device which we could use at execution time to override our current
notion of mode.


So something like:

Device d = null;
for (Device possibleDevice : OpenCLHelper.getDevices()) {
    if (possibleDevice.isGPU() && possibleDevice.getSomeParameter() > someCriteria) {
        d = possibleDevice;
        break;
    }
}

kernel.setDevice(d);
Range range = Range.create(global, localSize /* chosen from device characteristics */);
kernel.execute(range);

Alternatively, I think JOCL has the notion of getBestDevice(), which I like a 
lot.

So:

Device d = OpenCLHelper.getBestDevice();
kernel.setDevice(d);
Range range = Range.create(global, localSize /* chosen from device characteristics */);
kernel.execute(range);

So if we set a Device explicitly, it overrides any mode (what should fallback do then?).

This might also allow us to construct 'pseudo devices' that span physical 
devices, e.g. group all GPU devices into a single device entity so we can 
dispatch across devices.
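The pseudo-device idea could be sketched roughly as follows. This is not Aparapi API; the Gpu record and the proportional split by compute units are illustrative assumptions about how a spanning device might partition a global work size.

```java
import java.util.ArrayList;
import java.util.List;

public class PseudoDeviceSketch {

    // Hypothetical physical device: only the field this sketch needs
    // (corresponds to CL_DEVICE_MAX_COMPUTE_UNITS above).
    record Gpu(String name, int maxComputeUnits) {}

    // A 'pseudo device' spanning physical GPUs could split a global work
    // size across them in proportion to their compute units.
    static List<Integer> partition(List<Gpu> gpus, int globalSize) {
        int totalUnits = gpus.stream().mapToInt(Gpu::maxComputeUnits).sum();
        List<Integer> shares = new ArrayList<>();
        int assigned = 0;
        for (int i = 0; i < gpus.size(); i++) {
            int share = (i == gpus.size() - 1)
                    ? globalSize - assigned  // remainder goes to the last GPU
                    : globalSize * gpus.get(i).maxComputeUnits() / totalUnits;
            shares.add(share);
            assigned += share;
        }
        return shares;
    }
}
```

For example, two GPUs with 20 and 10 compute units splitting a global size of 3000 would receive 2000 and 1000 work items respectively; the shares always sum to the global size.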


But this will require some more thought/work ;)

Original comment by [email protected] on 14 Feb 2012 at 9:34

  • Changed state: Accepted
  • Added labels: Type-Enhancement
  • Removed labels: Type-Defect

@GoogleCodeExporter
Author

Those are very good suggestions, and they definitely seem to lead down the 
road to multi-GPU development.

After discussing this idea more, we could even use this list-of-devices concept 
to investigate more complicated patterns, such as deciding whether an application 
should use one card for OpenCL and another card for OpenGL (to maximize 
utilization of both).

Original comment by [email protected] on 14 Feb 2012 at 10:27
