Refactoring reducedim and related functions #7106
Conversation
end
end # let mapreducedim_fcache
I may be missing something, but why not use @ngenerate here?
The original code also has a cache explicitly instantiated. In this part, I basically followed what was done before. But yes, @ngenerate may be used here. I will look at this later.
The original code generated an F function for every function argument, not just for every number of dimensions, which is why @ngenerate was not sufficient. It looks like you no longer do that, so @ngenerate should suffice. OTOH this could cause performance regressions for code that was relying on the old reducedim behavior.
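For context, a minimal sketch of that older caching pattern (illustrative names and a naive 1-D kernel, not the actual Base code): one specialized kernel is generated, eval'ed, and cached per (f, N) pair, which is exactly what @ngenerate, specializing only on the dimensionality N, cannot express on its own.

# Hypothetical body generator: the real one built Cartesian loops
# specialized on N; this stand-in just returns a naive element-wise kernel
# with the mapping function f spliced in.
gen_mapreduce_body(f, N) = :( (op, R, A) -> begin
    for i = 1:length(A)
        R[1] = op(R[1], $f(A[i]))
    end
    R
end )

let fcache = Dict()   # keyed by (f, ndims(A)), like mapreducedim_fcache above
    global cached_mapreducedim!
    function cached_mapreducedim!(f::Function, op::Function, R, A::AbstractArray)
        key = (f, ndims(A))
        haskey(fcache, key) || (fcache[key] = eval(gen_mapreduce_body(f, ndims(A))))
        fcache[key](op, R, A)
    end
end

# e.g. cached_mapreducedim!(abs2, +, [0.0], [1.0, 2.0, 3.0]) returns [14.0]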
end
end
eval(ngenerate(:N, :(typeof(R)),
    :(_mapreducedim!{T,N}(f, op, R::AbstractArray, A::AbstractArray{T,N})), mapreducedim_body))
I think this can be simplified to use the standard syntax for @ngenerate instead of eval(ngenerate(...)). See the Cartesian docs and multidimensional.jl.
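For concreteness, a hedged sketch of the standard macro form being suggested, adapted from the Cartesian documentation of that era (mycopy! is an illustrative name, not part of this PR); the macro generates and caches one method body per dimensionality N:

using Base.Cartesian   # provides @ngenerate, @nloops, @nref in 0.3-era Julia

# Standard @ngenerate form: specialize the body on N, declaring the return
# type (typeof(dest)) for the fallback path.
@ngenerate N typeof(dest) function mycopy!{T,N}(dest::AbstractArray{T,N},
                                                src::AbstractArray{T,N})
    @nloops N i dest begin
        (@nref N dest i) = (@nref N src i)
    end
    dest
end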
Yes, it is doable. It's just that, personally, I feel it is more organized (and easier to debug) to have a separate body-generating function, especially when the body-generating code is substantial.
macroexpand makes it less important to have a separate body-generating function. The main reason for the pain of eval(ngenerate(...)) is when your dict lookup sometimes has to be more complicated than just the dimensionality.
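A small hedged illustration of that workflow (using the 0.3-era single-argument macroexpand): expanding the Cartesian macros directly shows the generated loops, so a separate body-generating function is not needed just to inspect the code.

using Base.Cartesian

# Show the nested loops that @nloops/@nref will generate for a 2-D case.
macroexpand(:(@nloops 2 i A begin
    (@nref 2 A i) += 1
end))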
Done, the macro
reducedim_init{T}(f, op::MaxFun, A::AbstractArray{T}, region) = reducedim_initarray0(A, region, typemin(evaluate(f, zero(T))))
reducedim_init{T}(f, op::MinFun, A::AbstractArray{T}, region) = reducedim_initarray0(A, region, typemax(evaluate(f, zero(T))))
reducedim_init{T}(f::Union(AbsFun,Abs2Fun), op::MaxFun, A::AbstractArray{T}, region) =
    reducedim_initarray(A, region, zero(evaluate(f, zero(T))))
Should this be reducedim_initarray0?
When a is empty, zero is a reasonable result for maxabs(a).
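A hedged toy illustration of that point (not the PR's code): since abs(x) >= 0 for every x, seeding the max-reduction with abs(zero(T)), i.e. zero, never changes the result for nonempty input and yields zero, rather than typemin(T) or an error, when the input is empty.

function maxabs_sketch(a::AbstractArray)
    v = abs(zero(eltype(a)))   # the zero(evaluate(f, zero(T)))-style seed
    for x in a
        v = max(v, abs(x))
    end
    return v
end

maxabs_sketch(Int[])       # 0
maxabs_sketch([-3, 2])     # 3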
Refactoring reducedim and related functions
This PR is along the same lines as #7061 and improves the generality and coherence of the code. The core implementation is done in mapreducedim!, and everything else is just a light-weight wrapper around this function.
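To illustrate that layering, here is a hedged, self-contained toy (illustrative names, 2-D only; not the PR's actual definitions): a single generic kernel does the work, and the user-facing reductions are light-weight wrappers that only pick f, op, and the seed value.

function mapreducedim_sketch!(f, op, R::AbstractMatrix, A::AbstractMatrix, dim::Int)
    for j = 1:size(A, 2), i = 1:size(A, 1)
        # collapse the reduced dimension to index 1 in the output
        ri, rj = dim == 1 ? (1, j) : (i, 1)
        R[ri, rj] = op(R[ri, rj], f(A[i, j]))
    end
    return R
end

# Allocate the output with the reduced dimension collapsed, filled with seed v0.
initsketch(A, dim, v0) = fill(v0, dim == 1 ? (1, size(A, 2)) : (size(A, 1), 1))

# Light-weight wrappers in the spirit of sum(A, dim) and maxabs(A, dim):
sum_sketch(A, dim)         = mapreducedim_sketch!(identity, +, initsketch(A, dim, zero(eltype(A))), A, dim)
maxabs_dims_sketch(A, dim) = mapreducedim_sketch!(abs, max, initsketch(A, dim, abs(zero(eltype(A)))), A, dim)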