
Commit f83b562

[GRCUDA-4] replaced grCUDA with GrCUDA in all files (exception: variable names starting with grCUDA, i.e. grCUDAExecutionContext) and filenames (actually no file or folder had grCUDA in their names) (#6)
1 parent f8cb9f9 commit f83b562

File tree

12 files changed: +75 -75 lines


LICENSE

+1 -1
@@ -25,5 +25,5 @@ OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 
-grCUDA depends on Truffle APIs licensed under the Universal Permissive
+GrCUDA depends on Truffle APIs licensed under the Universal Permissive
 License (UPL), Version 1.0 (https://opensource.org/licenses/UPL).

README.md

+15 -15
@@ -1,4 +1,4 @@
-# grCUDA: Polyglot GPU Access in GraalVM
+# GrCUDA: Polyglot GPU Access in GraalVM
 
 This Truffle language exposes GPUs to the polyglot [GraalVM](http://www.graalvm.org). The goal is to
@@ -15,23 +15,23 @@ Supported and tested GraalVM languages:
 - Java
 - C and Rust through the Graal Sulong Component
 
-A description of grCUDA and its the features can be found in the [grCUDA documentation](docs/grcuda.md).
+A description of GrCUDA and its the features can be found in the [GrCUDA documentation](docs/grcuda.md).
 
 The [bindings documentation](docs/bindings.md) contains a tutorial that shows
 how to bind precompiled kernels to callables, compile and launch kernels.
 
 **Additional Information:**
 
-- [grCUDA: A Polyglot Language Binding for CUDA in GraalVM](https://devblogs.nvidia.com/grcuda-a-polyglot-language-binding-for-cuda-in-graalvm/). NVIDIA Developer Blog,
+- [GrCUDA: A Polyglot Language Binding for CUDA in GraalVM](https://devblogs.nvidia.com/grcuda-a-polyglot-language-binding-for-cuda-in-graalvm/). NVIDIA Developer Blog,
 November 2019.
-- [grCUDA: A Polyglot Language Binding](https://youtu.be/_lI6ubnG9FY). Presentation at Oracle CodeOne 2019, September 2019.
+- [GrCUDA: A Polyglot Language Binding](https://youtu.be/_lI6ubnG9FY). Presentation at Oracle CodeOne 2019, September 2019.
 - [Simplifying GPU Access](https://developer.nvidia.com/gtc/2020/video/s21269-vid). Presentation at NVIDIA GTC 2020, March 2020.
 - [DAG-based Scheduling with Resource Sharing for Multi-task Applications in a Polyglot GPU Runtime](https://ieeexplore.ieee.org/abstract/document/9460491). Paper at IPDPS 2021 on the GrCUDA scheduler, May 2021. [Video](https://youtu.be/QkX0FHDRyxA) of the presentation.
 
-## Using grCUDA in the GraalVM
+## Using GrCUDA in the GraalVM
 
-grCUDA can be used in the binaries of the GraalVM languages (`lli`, `graalpython`,
-`js`, `R`, and `ruby)`. The JAR file containing grCUDA must be appended to the classpath
+GrCUDA can be used in the binaries of the GraalVM languages (`lli`, `graalpython`,
+`js`, `R`, and `ruby)`. The JAR file containing GrCUDA must be appended to the classpath
 or copied into `jre/languages/grcuda` of the Graal installation. Note that `--jvm`
 and `--polyglot` must be specified in both cases as well.
 
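For orientation while reading this diff, here is a minimal sketch of what using GrCUDA from GraalVM JavaScript looks like; the array type, length, and values are illustrative, and the `DeviceArray` constructor follows the README examples referenced above.

```javascript
// Run with GraalVM's node, e.g.: node --jvm --polyglot app.js
const cu = Polyglot.eval('grcuda', 'CU')      // GrCUDA root namespace object
const n = 4
const deviceArray = cu.DeviceArray('int', n)  // array visible to host code and GPU kernels
for (let i = 0; i < n; i++) {
  deviceArray[i] = i                          // element access works like a plain JS array
}
console.log(deviceArray[n - 1])               // prints 3
```
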
@@ -47,7 +47,7 @@ __global__ void increment(int *arr, int n) {
     arr[idx] += 1;
   }
 }`
-const cu = Polyglot.eval('grcuda', 'CU') // get grCUDA namespace object
+const cu = Polyglot.eval('grcuda', 'CU') // get GrCUDA namespace object
 const incKernel = cu.buildkernel(
   kernelSource, // CUDA kernel source code string
   'increment', // kernel name
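The hunk above cuts off in the middle of the README's `buildkernel` example. Purely as a hedged sketch of what follows (array size and launch sizes are illustrative; the kernel signature comes from the `increment(int *arr, int n)` source shown in the hunk header), a kernel built this way is launched with a launch-configuration argument list followed by the kernel arguments:

```javascript
// Assumes 'cu' and 'incKernel' from the example above; sizes are illustrative.
const n = 100
const arr = cu.DeviceArray('int', n)
for (let i = 0; i < n; i++) arr[i] = i
// First list: launch configuration (blocks, threads per block); second list: kernel args.
incKernel(80, 128)(arr, n)
console.log(arr[0])  // 1, since the kernel added 1 to every element
```
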
@@ -126,7 +126,7 @@ Documentation on [polyglot kernel launches](docs/launchkernel.md).
 
 ## Installation
 
-grCUDA can be downloaded as a binary JAR from [grcuda/releases](https://github.com/NVIDIA/grcuda/releases) and manually copied into a GraalVM installation.
+GrCUDA can be downloaded as a binary JAR from [grcuda/releases](https://github.com/NVIDIA/grcuda/releases) and manually copied into a GraalVM installation.
 
 1. Download GraalVM CE 21.1.0 for Linux `graalvm-ce-java11-linux-amd64-21.1.0.tar.gz`
 from [GitHub](https://github.com/graalvm/graalvm-ce-builds/releases/download/vm-21.1.0/graalvm-ce-java11-linux-amd64-21.1.0.tar.gz) and untar it in your
@@ -138,15 +138,15 @@ grCUDA can be downloaded as a binary JAR from [grcuda/releases](https://github.c
 export GRAALVM_DIR=`pwd`/graalvm-ce-java11-21.1.0
 ```
 
-2. Download the grCUDA JAR from [grcuda/releases](https://github.com/NVIDIA/grcuda/releases). If using the official release, the latest features (e.g. the asynchronous scheduler) are not available. Instead, follow the guide below to install GrCUDA from the source code.
+2. Download the GrCUDA JAR from [grcuda/releases](https://github.com/NVIDIA/grcuda/releases). If using the official release, the latest features (e.g. the asynchronous scheduler) are not available. Instead, follow the guide below to install GrCUDA from the source code.
 
 ```console
 cd $GRAALVM_DIR/jre/languages
 mkdir grcuda
 cp <download folder>/grcuda-0.1.0.jar grcuda
 ```
 
-3. Test grCUDA in Node.JS from GraalVM.
+3. Test GrCUDA in Node.JS from GraalVM.
 
 ```console
 cd $GRAALVM_DIR/bin
@@ -165,13 +165,13 @@ grCUDA can be downloaded as a binary JAR from [grcuda/releases](https://github.c
 ./gu install ruby
 ```
 
-## Instructions to build grCUDA from Sources
+## Instructions to build GrCUDA from Sources
 
-grCUDA requires the [mx build tool](https://github.com/graalvm/mx). Clone the mx
+GrCUDA requires the [mx build tool](https://github.com/graalvm/mx). Clone the mx
 repository and add the directory into `$PATH`, such that the `mx` can be invoked from
 the command line.
 
-Build grCUDA and the unit tests:
+Build GrCUDA and the unit tests:
 
 ```console
 cd <directory containing this README>
@@ -186,7 +186,7 @@ To run unit tests:
 mx unittest com.nvidia
 ```
 
-## Using grCUDA in a JDK
+## Using GrCUDA in a JDK
 
 Make sure that you use the [OpenJDK+JVMCI-21.1](https://github.com/graalvm/labs-openjdk-11/releases/download/jvmci-21.1-b05/labsjdk-ce-11.0.11+8-jvmci-21.1-b05-linux-amd64.tar.gz).

docs/bindings.md

+13 -13
@@ -3,28 +3,28 @@
 GPU kernels and host function can be executed as function calls.
 The corresponding functions are callable objects that are bound
 to the respective kernel or host functions.
-grCUDA provides different ways to define these bindings:
+GrCUDA provides different ways to define these bindings:
 
 - `bind(shareLibraryFile, functionNameAndSignature)` returns a callable
 object to the specified host function defined in the shared library (.so file).
 - `bindkernel(fileName, kernelNameAndSignature)` returns a callable object
 to specified kernel function defined in PTX or cubin file.
 - `bindall(targetNamespace, fileName, nidlFileName)` registers all functions
 listed in the NIDL (Native Interface Definition Language) for the
-specified binary file into the target namespace of grCUDA.
+specified binary file into the target namespace of GrCUDA.
 
 The first two approaches are useful to implement the one-off binding to
 a native function, be it a kernel or a host function. `bindall()` is use
 to bind multiple functions from the same binary or PTX file. This tutorial shows
 how to call existing native host functions and kernels from GraalVM languages
-through grCUDA.
+through GrCUDA.
 
 ## Binding and Invoking prebuilt Host Functions
 
 Host functions can be bound from existing shared libraries by `bind()` or
 `bindall()`. The former returns one single native function as a callable object
 whereas later binds can be used to bind multiple functions into a specified
-namespace within grCUDA.
+namespace within GrCUDA.
 
 This simple example shows how to call two host functions from a shared library.
 One function is defined a C++ namespace. The other function is defined as
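Since `bindall()` only appears in prose in the hunk above, here is an illustrative sketch of a call; the library name, NIDL file name, and target namespace are assumptions for the example, not files from this tutorial.

```javascript
const cu = Polyglot.eval('grcuda', 'CU')
// Register every function listed in the (hypothetical) NIDL file into the 'inc' namespace.
cu.bindall('inc', 'libincrement.so', 'increment.nidl')
// The bound functions should then be reachable through that namespace, e.g. cu.inc.<name>(...)
```
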
@@ -73,7 +73,7 @@ Build the shared library (Linux).
 nvcc -o libincrement.so -std=c++11 --shared -Xcompiler -fPIC increment.cu
 ```
 
-`bind()` can be used to "import" a single function into grCUDA as shown in
+`bind()` can be used to "import" a single function into GrCUDA as shown in
 the following NodeJS/JavaScript example.
 
 ```javascript
@@ -107,7 +107,7 @@ for (const el of deviceArray) {
 
 `bind()` takes the name (or path) of the shared library. The second argument
 specifies the signature in NIDL format. Add the keyword `cxx` for C++ style functions. The C++ namespace can be specified using `::`. Without `cxx`
-grCUDA assumes C linkage of the function and does not apply any name mangling.
+GrCUDA assumes C linkage of the function and does not apply any name mangling.
 `bind()` returns the function objects as callables, i.e., `TruffleObject`
 instances for which `isExecutable()` is `true`.
 
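As a hedged illustration of the `cxx` keyword and `::` qualifier described in the hunk above: the function names, library name, and the exact NIDL parameter syntax below are assumptions for the sketch, not code from the tutorial.

```javascript
const cu = Polyglot.eval('grcuda', 'CU')

// C linkage: no 'cxx' keyword, so no name mangling is applied to the symbol.
const incC = cu.bind('libincrement.so',
  'increment_c(arr: inout pointer sint32, n: sint32): void')

// C++ function inside a namespace: 'cxx' keyword plus a '::'-qualified name.
const incCpp = cu.bind('libincrement.so',
  'cxx mylib::increment_cpp(arr: inout pointer sint32, n: sint32): void')
```
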
@@ -191,7 +191,7 @@ nvcc -cubin -gencode=arch=compute_75,code=sm_75 \
 -o increment.cubin increment_kernels.cu
 ```
 
-`bindkernel()` "imports" a single kernel function into grCUDA. `bindkernel()`
+`bindkernel()` "imports" a single kernel function into GrCUDA. `bindkernel()`
 returns the kernel as a callable object. It can be called like a function.
 The parameters are the kernel grid size and as optional the amount dynamic shared
 memory. This is analogous to the kernel launch configuration in CUDA that is
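For context, a sketch of a `bindkernel()` call and launch; the cubin file name and the NIDL-style signature string are assumptions based on the description above, and `deviceArray` and `n` stand in for objects created beforehand.

```javascript
const cu = Polyglot.eval('grcuda', 'CU')
const incKernel = cu.bindkernel('increment.cubin',
  'increment(arr: inout pointer sint32, n: sint32)')
// Launch configuration (grid size, block size), then the kernel arguments.
incKernel(160, 256)(deviceArray, n)
```
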
@@ -275,7 +275,7 @@ If the a kernel function is not declared with the `extern "C"`
 `nvcc` generates C++ symbols for kernel functions. Such kernels can be enclosed
 in a `kernels` scope in the NIDL file and subsequently bound in one step.
 As in `hostfuncs` for C++ host functions, a C++ namespace can also be
-specified in `kernels`. grCUDA the searches all functions within the scope
+specified in `kernels`. GrCUDA the searches all functions within the scope
 in this namespace.
 
 Kernel function defined with `extern "C"` can bound in a `ckernels` scope.
@@ -288,7 +288,7 @@ e.g., `increment`.
 
 ## Runtime-compilation of GPU Kernels from CUDA C/C++
 
-grCUDA can also compile GPU kernels directly from CUDA C/C++
+GrCUDA can also compile GPU kernels directly from CUDA C/C++
 source code passed as a host-string argument to
 `buildkernel(..)`. The signature of the function is:
 
@@ -339,7 +339,7 @@ print(device_array)
 ## Launching Kernels
 
 Once a kernel function is bound to a callable host-object or registered as
-a function within grCUDA, it can be launched like a function with two argument lists (for exceptions in Ruby and Java and Ruby see the examples below).
+a function within GrCUDA, it can be launched like a function with two argument lists (for exceptions in Ruby and Java and Ruby see the examples below).
 
 ```test
 kernel(num_blocks, block_size)(arg1, ..., argN)
@@ -355,7 +355,7 @@ The first argument list corresponds to the launch configuration, i.e.,
 the kernel grid (number of blocks) and the block sizes (number of
 threads per block).
 
-grCUDA currently only supports synchronous kernel launches,
+GrCUDA currently only supports synchronous kernel launches,
 i.e., there is an implicit `cudaDeviceSynchronize()` after every
 launch.
 
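A short sketch of the two-argument-list launch and the implicit synchronization described in this hunk; `kernel`, `out_arr`, `in_arr`, and `num_elements` are placeholders for objects created earlier.

```javascript
const numBlocks = 160    // kernel grid: number of blocks
const blockSize = 1024   // threads per block
kernel(numBlocks, blockSize)(out_arr, in_arr, num_elements)
// The launch is synchronous (implicit cudaDeviceSynchronize()),
// so the device arrays can be read immediately afterwards.
```
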
@@ -372,8 +372,8 @@ configured_kernel = kernel(num_blocks, block_size)
 configured_kernel(out_arr, in_ar, num_elements)
 ```
 
-grCUDA also supports 2D and 3D kernel grids that are specified
-with the `dim3` in CUDA C/C++. In grCUDA `num_blocks` and `block_size`
+GrCUDA also supports 2D and 3D kernel grids that are specified
+with the `dim3` in CUDA C/C++. In GrCUDA `num_blocks` and `block_size`
 can be integers for 1-dimensional kernels or host language sequences
 of length 1, 2, or 3 (Lists or Tuples in Python, Arrays in JavaScript
 and Ruby, and vectors in R)
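To round off the section, a hedged sketch of a 2D launch configuration passed as JavaScript arrays, as described above; `matrixKernel`, `matrix`, and the sizes are illustrative placeholders.

```javascript
const numBlocks = [16, 16]  // blocks along x and y (dim3 equivalent)
const blockSize = [32, 8]   // threads per block along x and y
matrixKernel(numBlocks, blockSize)(matrix, width, height)
```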
