use nd_as_jl to re-write accuracy
pluskid committed Nov 10, 2015
1 parent 2055727 commit ea90b55
Showing 3 changed files with 92 additions and 9 deletions.
71 changes: 70 additions & 1 deletion docs/api/ndarray.rst
@@ -178,7 +178,7 @@ Copying functions

.. function:: convert(::Type{Array{T}}, arr :: NDArray)

Convert an :class:`NDArray` into a Julia ``Array`` of specific type.
Convert an :class:`NDArray` into a Julia ``Array`` of specific type. Data will be copied.



@@ -296,6 +296,75 @@ Basic arithmetics



Manipulating as Julia Arrays
----------------------------

.. function:: @nd_as_jl(captures..., statement)

   A convenience macro that allows operating on :class:`NDArray` objects as ordinary Julia Arrays. For example,

   .. code-block:: julia

      x = mx.zeros(3,4)
      y = mx.ones(3,4)
      z = mx.zeros((3,4), mx.gpu())

      @mx.nd_as_jl ro=(x,y) rw=z begin
        # now x, y, z are just ordinary Julia Arrays
        z[:,1] = y[:,2]
        z[:,2] = 5
      end

   Under the hood, the macro converts all the declared captures from :class:`NDArray` into Julia
   Arrays by using :func:`try_get_shared`, and automatically commits the modifications back into
   the :class:`NDArray`\ s declared as ``rw``. This is useful for fast prototyping and for
   implementing non-critical computations, such as an :class:`AbstractEvalMetric`.

.. note::

      - Multiple ``rw`` and/or ``ro`` capture declarations can be made.
      - The macro does **not** check that ``ro`` captures are not modified. If the
        original :class:`NDArray` lives in CPU memory, the corresponding Julia Array very likely
        shares data with it, so modifying the Julia Array will also modify the underlying
        :class:`NDArray`.
      - More importantly, since :class:`NDArray` operations are asynchronous, the macro waits for
        pending *writes* on ``rw`` variables but only for pending *reads* on ``ro`` variables. If
        you write into an ``ro`` variable, **and** the memory is shared, a race condition might
        occur and the behavior is undefined.
      - When an :class:`NDArray` is captured as ``rw``, its contents are always synced back
        at the end.
      - The expanded macro always evaluates to ``nothing``.
      - The statements are wrapped in a ``let`` block, so locally introduced variables will not be
        available after the statements. Declare any variables you need before calling the macro.
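   Conceptually, the expansion produced by the macro resembles the following hand-written sketch. This is an illustrative approximation only (the actual macro generates hygienic symbols via ``gensym``, and the ``copy!`` call stands in for its sync logic):

   ```julia
   # Hypothetical, hand-expanded equivalent of
   #   @mx.nd_as_jl ro=(x,y) rw=z begin ... end
   mx._wait_to_read(x); mx._wait_to_read(y)  # wait for pending ops reading ro captures
   mx._wait_to_write(z)                      # wait for pending ops writing rw captures
   z_orig = z                                # remember the original rw NDArray
   let x = mx.try_get_shared(x),
       y = mx.try_get_shared(y),
       z = mx.try_get_shared(z)
     # user statements run on plain Julia Arrays
     z[:,1] = y[:,2]
     z[:,2] = 5
     # commit rw captures back when no data sharing happened
     if !mx.is_shared(z, z_orig)
       copy!(z_orig, z)
     end
   end
   nothing
   ```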




.. function:: try_get_shared(arr)

Try to create a Julia array by sharing the data with the underlying :class:`NDArray`.

:param NDArray arr: the array to be shared.

.. warning::

      The returned array is not guaranteed to share data with the underlying :class:`NDArray`.
      In particular, data sharing is possible only when the :class:`NDArray` lives in CPU memory.




.. function:: is_shared(j_arr, arr)

Test whether ``j_arr`` is sharing data with ``arr``.

:param Array j_arr: the Julia Array.
:param NDArray arr: the :class:`NDArray`.




IO
--

15 changes: 7 additions & 8 deletions src/metric.jl
@@ -49,14 +49,13 @@ type Accuracy <: AbstractEvalMetric
end

function _update_single_output(metric :: Accuracy, label :: NDArray, pred :: NDArray)
label = copy(label)
pred = copy(pred)

n_sample = size(pred)[end]
metric.n_sample += n_sample
for i = 1:n_sample
klass = indmax(pred[:,i])
metric.acc_sum += (klass-1) == label[i]
@nd_as_jl ro=(label,pred) begin
n_sample = size(pred)[end]
metric.n_sample += n_sample
for i = 1:n_sample
klass = indmax(pred[:,i])
metric.acc_sum += (klass-1) == label[i]
end
end
end
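In plain Julia terms, the accuracy update that now runs inside ``@nd_as_jl`` is equivalent to this standalone sketch on ordinary Arrays (``indmax`` is the Julia 0.4-era name for what later became ``argmax``; the function name here is illustrative, not part of the committed code):

```julia
# pred is (n_class, n_sample); label holds 0-based class indices as floats.
function accuracy_update(pred::Matrix{Float64}, label::Vector{Float64})
  n_sample = size(pred, 2)
  acc_sum  = 0
  for i = 1:n_sample
    klass = indmax(pred[:, i])         # 1-based index of the highest score
    acc_sum += (klass - 1) == label[i] # compare against the 0-based label
  end
  return acc_sum, n_sample
end
```

The `ro=(label,pred)` declaration is sufficient here because the metric only reads the arrays; the running sums live in the `metric` struct, not in the captures.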

15 changes: 15 additions & 0 deletions src/ndarray.jl
@@ -615,6 +615,10 @@ Manipulating as Julia Arrays
    original :class:`NDArray` lives in CPU memory, the corresponding Julia Array very likely
    shares data with it, so modifying the Julia Array will also modify the underlying
    :class:`NDArray`.
  - More importantly, since :class:`NDArray` operations are asynchronous, the macro waits for
    pending *writes* on ``rw`` variables but only for pending *reads* on ``ro`` variables. If
    you write into an ``ro`` variable, **and** the memory is shared, a race condition might
    occur and the behavior is undefined.
  - When an :class:`NDArray` is captured as ``rw``, its contents are always synced back
    at the end.
  - The expanded macro always evaluates to ``nothing``.
@@ -665,6 +669,8 @@ macro nd_as_jl(m_args...)
rw_origs = [gensym() for _ in nd_rw]

save_statements = Expr(:block, [:($v_orig = $v) for (v_orig, v) in zip(rw_origs, nd_rw)]...)
wait_statements = Expr(:block, [:(_wait_to_read($v)) for v in nd_ro]...,
[:(_wait_to_write($v)) for v in nd_rw]...)
clear_statements = Expr(:block, [:($v_orig = nothing) for v_orig in rw_origs]...)
let_assignments = [:($v = try_get_shared($v)) for v in nd_all]
sync_statements = map(rw_origs, nd_rw) do v_orig, v
@@ -678,9 +684,11 @@ macro nd_as_jl(m_args...)
sync_statements = Expr(:block, sync_statements...)

let_statement = Expr(:let, quote
$stmts
$sync_statements
end, let_assignments...)
m_body = quote
$wait_statements
$save_statements
$let_statement
$clear_statements
@@ -698,6 +706,13 @@ function pointer(arr :: NDArray)
@mxcall(:MXNDArrayGetData, (MX_handle, Ref{Ptr{MX_float}}), arr, pdata)
return pdata[]
end
function _wait_to_read(arr :: NDArray)
@mxcall(:MXNDArrayWaitToRead, (MX_handle,), arr)
end
function _wait_to_write(arr :: NDArray)
@mxcall(:MXNDArrayWaitToWrite, (MX_handle,), arr)
end

#=doc
.. function:: try_get_shared(arr)
