Make things work for general AbstractArrays #980

Merged: 12 commits merged into jump-dev:master on Mar 1, 2017

Conversation

@tkoolen (Contributor) commented Feb 24, 2017

A lot of functions currently require Array types even though they would work just fine for any other subtype of AbstractArray. This PR simply replaces most ::Array annotations in function signatures with ::AbstractArray (and likewise for Vector and Matrix).

I defined norm only for AbstractVector and AbstractMatrix, not for general AbstractArrays, because Base only defines it for AbstractVector and AbstractMatrix, and defining it for AbstractArray results in an ambiguity error.
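
To illustrate the kind of ambiguity involved (a self-contained sketch in 0.5-era syntax, with MyScalar standing in for a JuMP scalar type; not JuMP's actual definitions):

abstract MyScalar                                      # stand-in for a JuMP scalar type
f(x::AbstractVector) = "container method"              # like Base's norm(::AbstractVector)
f{T<:MyScalar}(x::AbstractArray{T}) = "eltype method"  # the hypothetical widened method
# For a Vector of MyScalar values both methods apply and neither is more
# specific (one wins on container type, the other on element type), so the
# call is ambiguous.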

Please be extra careful reviewing my changes to _multiply! and _multiplyt! in operators.jl.

This came up while I was playing around with storing JuMP Variables in AxisArrays for a trajectory-optimization-style problem, because I had trouble keeping track of the order of the dimensions of the variable arrays (this is proving to be quite a nice combination of tools, by the way).
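
Roughly, the combination looks like this (a minimal sketch assuming AxisArrays.jl; the axis names are just illustrative):

using JuMP, AxisArrays
m = Model()
@variable(m, x[1:3, 1:10])                                 # a plain Array{Variable,2}
traj = AxisArray(x, Axis{:coord}(1:3), Axis{:knot}(1:10))  # name the dimensions
knot2 = traj[Axis{:knot}(2)]                               # slice by axis name, not position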

@tkoolen (Contributor Author) commented Feb 24, 2017

cc: @rdeits

src/JuMP.jl Outdated
@@ -909,9 +909,9 @@ include("print.jl")
# Deprecations
include("deprecated.jl")

-getvalue{T<:JuMPTypes}(arr::Array{T}) = map(getvalue, arr)
+getvalue{T<:JuMPTypes}(arr::AbstractArray{T}) = map(getvalue, arr)
Member: Not needed, use getvalue.(arr) instead.

Contributor Author: So delete the method?

Member: It's been previously discussed. The conclusion for now is to allow getvalue on objects that JuMP creates, and JuMP can create Array{Variable}, e.g., PSD matrices.
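
For reference, the suggested broadcast form works on any AbstractArray without a blanket method (a minimal sketch; assumes a solved model so getvalue is defined on each element):

vals = getvalue.(arr)   # elementwise getvalue; same result as map(getvalue, arr)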

src/JuMP.jl Outdated

-function setvalue{T<:AbstractJuMPScalar}(set::Array{T}, val::Array)
+function setvalue{T<:AbstractJuMPScalar}(set::AbstractArray{T}, val::AbstractArray)
Member: Also not needed, use setvalue.(set,val)

src/JuMPArray.jl Outdated
@@ -10,7 +10,7 @@ immutable JuMPArray{T,N,NT} <: JuMPContainer{T,N}
meta::Dict{Symbol,Any}
end

-@generated function JuMPArray{T,N}(innerArray::Array{T,N}, indexsets::NTuple{N,Any})
+@generated function JuMPArray{T,N}(innerArray::AbstractArray{T,N}, indexsets::NTuple{N,Any})
Member: I don't think this is needed.

src/affexpr.jl Outdated
error("The operators <=, >=, and == can only be used to specify scalar constraints. If you are trying to add a vectorized constraint, use the element-wise dot comparison operators (.<=, .>=, or .==) instead")

-function addVectorizedConstraint(m::Model, v::Array{LinearConstraint})
+function addVectorizedConstraint(m::Model, v::AbstractArray{LinearConstraint})
Member: This is an internal function, it's unclear how it could be called with an AbstractArray.

src/callbacks.jl Outdated
@@ -59,7 +59,7 @@ function addinfocallback(m::Model, f::Function; when::Symbol = _unspecifiedstate
push!(m.callbacks, InfoCallback(f, when))
end

-function lazycallback(d::MathProgBase.MathProgCallbackData, m::Model, cbs::Vector{LazyCallback})
+function lazycallback(d::MathProgBase.MathProgCallbackData, m::Model, cbs::AbstractVector{LazyCallback})
Member: Internal, won't be called with AbstractVector.

src/callbacks.jl Outdated
@@ -92,7 +92,7 @@ function lazycallback(d::MathProgBase.MathProgCallbackData, m::Model, cbs::Vecto
:Continue
end

-function attach_callbacks(m::Model, cbs::Vector{LazyCallback})
+function attach_callbacks(m::Model, cbs::AbstractVector{LazyCallback})
Member: Internal, won't be called with AbstractVector.

src/callbacks.jl Outdated
@@ -101,7 +101,7 @@ function attach_callbacks(m::Model, cbs::Vector{LazyCallback})
end
end

-function cutcallback(d::MathProgBase.MathProgCallbackData, m::Model, cbs::Vector{CutCallback})
+function cutcallback(d::MathProgBase.MathProgCallbackData, m::Model, cbs::AbstractVector{CutCallback})
Member: Internal, won't be called with AbstractVector.

src/callbacks.jl Outdated
@@ -119,7 +119,7 @@ function cutcallback(d::MathProgBase.MathProgCallbackData, m::Model, cbs::Vector
:Continue
end

-function attach_callbacks(m::Model, cbs::Vector{CutCallback})
+function attach_callbacks(m::Model, cbs::AbstractVector{CutCallback})
Member: Internal, won't be called with AbstractVector.

src/callbacks.jl Outdated
@@ -129,7 +129,7 @@ function attach_callbacks(m::Model, cbs::Vector{CutCallback})
end


-function heurcallback(d::MathProgBase.MathProgCallbackData, m::Model, cbs::Vector{HeuristicCallback})
+function heurcallback(d::MathProgBase.MathProgCallbackData, m::Model, cbs::AbstractVector{HeuristicCallback})
Member: Internal, won't be called with AbstractVector.

src/callbacks.jl Outdated
@@ -147,7 +147,7 @@ function heurcallback(d::MathProgBase.MathProgCallbackData, m::Model, cbs::Vecto
:Continue
end

-function attach_callbacks(m::Model, cbs::Vector{HeuristicCallback})
+function attach_callbacks(m::Model, cbs::AbstractVector{HeuristicCallback})
Member: Internal, won't be called with AbstractVector.

src/callbacks.jl Outdated
@@ -156,7 +156,7 @@ function attach_callbacks(m::Model, cbs::Vector{HeuristicCallback})
end
end

-function infocallback(d::MathProgBase.MathProgCallbackData, m::Model, cbs::Vector{InfoCallback})
+function infocallback(d::MathProgBase.MathProgCallbackData, m::Model, cbs::AbstractVector{InfoCallback})
Member: Internal, won't be called with AbstractVector.

src/nlp.jl Outdated
@@ -169,7 +169,7 @@ type NLPEvaluator <: MathProgBase.AbstractNLPEvaluator
end
end

-function simplify_expression(nd::Vector{NodeData}, const_values, subexpression_linearity, fixed_variables, parameter_values, x_values, subexpression_values)
+function simplify_expression(nd::AbstractVector{NodeData}, const_values, subexpression_linearity, fixed_variables, parameter_values, x_values, subexpression_values)
Member: None of the changes in this file are needed.

src/operators.jl Outdated
@@ -370,7 +370,7 @@ function _multiply!{T<:JuMPTypes}(ret::Array{T}, lhs::Array, rhs::Array)
end

# this computes lhs.'*rhs and places it in ret
-function _multiplyt!{T<:JuMPTypes}(ret::Array{T}, lhs::Array, rhs::Array)
+function _multiplyt!{T<:JuMPTypes}(ret::AbstractArray{T}, lhs::AbstractArray, rhs::AbstractArray)
Member: These methods are designed for dense matrices and shouldn't be widened.

src/sos.jl Outdated
@@ -25,7 +25,7 @@ Base.copy(sos::SOSConstraint, new_model::Model) =

# Given a vector of affine expressions, extract a vector of the single
# variable in each expression and a vector of their coefficients
-function constructSOS(m::Model, coll::Vector{AffExpr})
+function constructSOS(m::Model, coll::AbstractVector{AffExpr})
Member: These changes don't seem relevant.

@mlubin (Member) commented Feb 24, 2017

I'm willing to consider this use case, but the changes here are way too broad and touch a number of internal functions that would never be called with AbstractArray. Could you try to make the changes minimal with respect to what you need, and also add unit tests covering them?

@tkoolen (Contributor Author) commented Feb 24, 2017

Sure. Thanks for the quick review. I'd like to argue that AbstractArray should always be the default (unless things are specialized for dense arrays, like in _multiply!, of course), and restricting to Array is kind of arbitrary, even in internal code that is currently not being called with other types. But I'll try to make the changes minimal and add tests.

src/quadexpr.jl Outdated
@@ -95,7 +95,7 @@ function getvalue(a::QuadExpr)
end
return ret
end
-getvalue(arr::Array{QuadExpr}) = map(getvalue, arr)
+getvalue(arr::AbstractArray{QuadExpr}) = map(getvalue, arr)
Member: Also getvalue.(arr) here.

@rdeits (Contributor) commented Feb 24, 2017

To motivate why this would be nice to have, here are some cool things that @tkoolen and I have been doing w.r.t. AxisArrays of JuMP variables: https://gist.github.com/rdeits/3371af92c9c4aa61353f38c9a91c02e9

@mlubin (Member) commented Feb 25, 2017

AxisArrays look like a potential replacement for JuMPArrays.

@mlubin (Member) commented Feb 25, 2017

> restricting to Array is kind of arbitrary, even in internal code that is currently not being called with other types.

Personally I don't see much value in putting AbstractArray over no type restriction at all if we're already going to loosen the type beyond what's actually used and has been tested.

@rdeits (Contributor) commented Feb 25, 2017

AxisArrays do seem like a great replacement for the categorical indexing in JuMPArray. I think they don't cover the ragged and non-1-based indexing that JuMPArray supports, though.

@tkoolen (Contributor Author) commented Feb 25, 2017

I've addressed your comments. If there are more places where you want things to be more specific, let me know.

I limited the tests to cases that resulted in MethodErrors before this PR. There are quite a few cases that 'work' before this PR but don't use the (I guess optimized) JuMP code, even though it would be applicable. I'm not sure how to test for these. For example,

m = Model()
x = view(@variable(m, [1:3]), :)
A = rand(3, 3)

@show @which sum(x)
@show @which diagm(x)
@show @which norm(x)
@show @which A * x
@show @which x + 1
@show @which x * 1

prints

@which(sum(x)) = sum(a) at reduce.jl:229
@which(diagm(x)) = diagm{T}(v::AbstractArray{T,1}) at linalg/dense.jl:121
@which(norm(x)) = norm(x::AbstractArray{T<:Any,1}) at linalg/generic.jl:201
@which(A * x) = *{T<:Union{Complex{Float32},Complex{Float64},Float32,Float64},S}(A::Union{Base.ReshapedArray{T,2,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N<:Any}}},DenseArray{T,2},SubArray{T,2,A<:Union{Base.ReshapedArray{T<:Any,N<:Any,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N<:Any}}},DenseArray},I<:Tuple{Vararg{Union{Base.AbstractCartesianIndex,Colon,Int64,Range{Int64}},N<:Any}},L<:Any}}, x::Union{Base.ReshapedArray{S,1,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N<:Any}}},DenseArray{S,1},SubArray{S,1,A<:Union{Base.ReshapedArray{T<:Any,N<:Any,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N<:Any}}},DenseArray},I<:Tuple{Vararg{Union{Base.AbstractCartesianIndex,Colon,Int64,Range{Int64}},N<:Any}},L<:Any}}) at linalg/matmul.jl:79
@which(x + 1) = +(A::AbstractArray, x::Number) at arraymath.jl:94
@which(x * 1) = *(A::AbstractArray, B::Number) at abstractarraymath.jl:97

while

m = Model()
x = @variable(m, [1:3])
A = rand(3, 3)

@show @which sum(x)
@show @which diagm(x)
@show @which norm(x)
@show @which A * x
@show @which x + 1
@show @which x * 1

prints

@which(sum(x)) = sum(j::Array{JuMP.Variable,N<:Any}) at /Users/twan/code/julia/MixedIntegerExperiments/v0.5/JuMP/src/operators.jl:279
@which(diagm(x)) = diagm(x::Array{JuMP.Variable,1}) at /Users/twan/code/julia/MixedIntegerExperiments/v0.5/JuMP/src/operators.jl:351
@which(norm(x)) = norm{V<:JuMP.AbstractJuMPScalar}(x::Array{V,N<:Any}) at /Users/twan/code/julia/MixedIntegerExperiments/v0.5/JuMP/src/norms.jl:44
@which(A * x) = *{T<:Union{JuMP.AbstractJuMPScalar,JuMP.GenericNormExpr{2,Float64,JuMP.Variable},JuMP.GenericNorm{P,Float64,JuMP.Variable},JuMP.NonlinearExpression}}(A::Union{Array{T<:Any,2},SparseMatrixCSC}, x::Union{Array{T,1},Array{T,2},SparseMatrixCSC{T,Ti<:Integer}}) at /Users/twan/code/julia/MixedIntegerExperiments/v0.5/JuMP/src/operators.jl:447
@which(x + 1) = +{T<:Union{JuMP.AbstractJuMPScalar,JuMP.GenericNormExpr{2,Float64,JuMP.Variable},JuMP.GenericNorm{P,Float64,JuMP.Variable},JuMP.NonlinearExpression}}(lhs::Union{Array{T,N<:Any},SparseMatrixCSC{T,Ti<:Integer}}, rhs::Number) at /Users/twan/code/julia/MixedIntegerExperiments/v0.5/JuMP/src/operators.jl:507
@which(x * 1) = *{T<:Union{JuMP.AbstractJuMPScalar,JuMP.GenericNormExpr{2,Float64,JuMP.Variable},JuMP.GenericNorm{P,Float64,JuMP.Variable},JuMP.NonlinearExpression}}(lhs::Array{T,N<:Any}, rhs::Number) at /Users/twan/code/julia/MixedIntegerExperiments/v0.5/JuMP/src/operators.jl:538

(note: this PR in its current form doesn't address all of these).

> Personally I don't see much value in putting AbstractArray over no type restriction at all if we're already going to loosen the type beyond what's actually used and has been tested.

The main value I see is that it conveys the intention of the code. Moreover, you still get to put constraints on the dimension and element type, and you need it to avoid ambiguities. However, widening signatures to ::AbstractArray also means that if things don't work for a type that satisfies the requirements of a (reasonable) AbstractArray subtype, users get to complain, so I understand your hesitancy.

A (possibly overly) cheap response to your argument is that for some methods, you're not currently testing all possible values of N or T for Array{T, N} either. You couldn't possibly test all possible AbstractArray subtypes, but in my opinion, that doesn't mean that using ::Array instead of ::AbstractArray is necessarily the right thing to do.
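
To make the first point concrete (a generic sketch, not JuMP code):

g{T<:Real}(v::AbstractVector{T}) = length(v)   # still constrains eltype and dimensionality
g(v) = length(v)                               # no annotation: accepts anything, conveys nothing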

src/operators.jl Outdated

###############
# The _multiply!(buf,y,z) adds the results of y*z into the buffer buf. No bounds/size
# checks are performed; it is expected that the caller has done this, has ensured
# that the eltype of buf is appropriate, and has zeroed the elements of buf (if desired).

-function _multiply!{T<:JuMPTypes}(ret::Array{T}, lhs::Array, rhs::Array)
+function _multiply!{T<:JuMPTypes}(ret::AbstractArray{T}, lhs::Array, rhs::Array)
Member: @joehuchette should comment but I don't think this method was designed for AbstractArrays. The built-in fallback should work.

Contributor: Yes, I don't think this part of the code will work with generic abstract arrays as-is.

src/operators.jl Outdated
@@ -571,7 +571,7 @@ end; end

# The following are primarily there for internal use in the macro code for @constraint
for op in [:(+), :(-)]; @eval begin
-function $op(lhs::Array{Variable},rhs::Array{Variable})
+function $op(lhs::AbstractArray{Variable},rhs::AbstractArray{Variable})
(sz = size(lhs)) == size(rhs) || error("Incompatible sizes for $op: $sz $op $(size(rhs))")
ret = Array{AffExpr}(sz)
for I in eachindex(ret)
@mlubin (Member) commented Feb 25, 2017:
I'm concerned about the validity of using eachindex from ret to index into lhs and rhs when these could be different types.

Contributor: I think eachindex(ret,lhs,rhs) should work?

Contributor Author: Oops, forgot this one.

Contributor Author: Fixed (here and in a number of other locations).
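
For reference, the multi-array eachindex pattern suggested above (a generic sketch, not the actual JuMP loop):

lhs = rand(3); rhs = rand(3); ret = similar(lhs)
for I in eachindex(ret, lhs, rhs)   # one index set valid for all three arrays;
    ret[I] = lhs[I] + rhs[I]        # throws a DimensionMismatch if shapes differ
end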

src/print.jl Outdated
@@ -44,7 +44,7 @@ end

# TODO: get rid of this! This is only a helper, and should be Base.values
# (and probably live there, as well)
-_values(x::Array) = x
+_values(x::AbstractArray) = x
Member: Where is this used and why does it need to change?

src/print.jl Outdated
@@ -103,7 +103,7 @@ math(s,mathmode) = mathmode ? s : "\$\$ $s \$\$"
# helper to look up corresponding JuMPContainerData
printdata(v::JuMPContainer) = _getmodel(v).varData[v]
getname(x::JuMPContainer) = hasmeta(x, :model) ? printdata(x).name : "__anon__"
-function printdata(v::Array{Variable})
+function printdata(v::AbstractArray{Variable})
Member: Are you redefining printing for AbstractArray{Variable}? If not, this doesn't need to be touched.

@mlubin (Member) commented Feb 25, 2017

> I could change the tests to use OffsetArrays, but I do foresee problems on 0.5, where support for such arrays is only partial at the moment.

It's the opposite. Any code in JuMP that's calling size on (now) an AbstractArray with custom indices is probably incorrect. This will throw an error on 0.5, which is fine. On 0.6 it may silently do the wrong thing, which is not okay.

@mlubin (Member) commented Feb 25, 2017

One option is to follow the suggestion in the docs:

> Note that if you don't want to be bothered supporting arrays with non-1 indexing, you can add the following line:
>
> @assert all(x->isa(x, Base.OneTo), indices(A))
>
> at the top of any function.
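
Applied to a function, that looks something like this (hypothetical helper, for illustration only):

function _first3(vs::AbstractVector)                 # hypothetical, not a JuMP function
    @assert all(x->isa(x, Base.OneTo), indices(vs))  # reject offset-indexed arrays up front
    return vs[1:min(3, length(vs))]                  # 1-based code is safe below this point
end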

@tkoolen (Contributor Author) commented Feb 27, 2017

I'll push some more changes tonight.

@tkoolen (Contributor Author) commented Feb 28, 2017

New round of changes, please take a look.

src/operators.jl Outdated
@@ -277,7 +277,8 @@ Base.sum(j::JuMPDict) = sum(values(j.tupledict))
Base.sum(j::JuMPArray{Variable}) = AffExpr(vec(j.innerArray), ones(length(j.innerArray)), 0.0)
Base.sum(j::JuMPDict{Variable}) = AffExpr(collect(values(j.tupledict)), ones(length(j.tupledict)), 0.0)
Base.sum(j::Array{Variable}) = AffExpr(vec(j), ones(length(j)), 0.0)
+Base.sum(j::AbstractArray{Variable}) = sum([j...]) # to handle non-one-indexed arrays.
function Base.sum{T<:GenericAffExpr}(affs::Array{T})
Member: Unless things have changed recently, splatting large arrays can cause big performance issues when the dimension is large, e.g., into the thousands. collect(j) might be better here.

Contributor Author: collect(::OffsetArray) returns a copy of the OffsetArray (with the same indices). I found it surprisingly hard to convert from an OffsetArray to a regular Array. Maybe I'm missing something.

Contributor: How about [v[i] for i in eachindex(v)]?

Contributor Author: D'oh. Yeah, that's better.
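
For future reference, the difference (assuming OffsetArrays.jl on 0.5/0.6, where collect preserved the offset axes as noted above):

using OffsetArrays
v = OffsetArray([10, 20, 30], -3)    # indices -2:0
w = [v[i] for i in eachindex(v)]     # a plain 1-based Vector with the same values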

@@ -569,18 +570,6 @@ end; end
(/){T<:JuMPTypes}(lhs::SparseMatrixCSC{T}, rhs::Number) =
SparseMatrixCSC(lhs.m, lhs.n, copy(lhs.colptr), copy(lhs.rowval), lhs.nzval ./ rhs)

-# The following are primarily there for internal use in the macro code for @constraint
-for op in [:(+), :(-)]; @eval begin
Member: Are these methods now covered by built-ins in Julia?

Contributor Author: (response elided)
Member: Ok, thanks for checking.

@mlubin (Member) commented Feb 28, 2017

Getting closer. Please correct the Travis failures as well. (AppVeyor failures are expected.)

@tkoolen (Contributor Author) commented Feb 28, 2017

More fixes.

Two random comments on getting things to work for OffsetArrays, for future reference:

src/affexpr.jl Outdated
+function addVectorizedConstraint(m::Model, v::AbstractArray{LinearConstraint})
     ret = Array{LinConstrRef}(size(v))
     for I in eachindex(v)
         ret[I] = addconstraint(m, v[I])
Contributor: I'm somewhat uncertain if this method should be changed. Is there a compelling reason to pass a sparse matrix of linear constraints, for example?

Contributor Author: Without changing this method, the following fails:

m = Model()
v = @variable(m, [1:3])
x = OffsetArray(v, -3)
@constraint(m, x .== 0)

because there is no convert method that constructs an Array from an OffsetArray (a conscious design decision in OffsetArrays).

Member: This doesn't seem like the right fix. I think constructconstraint!(x::Array, sense::Symbol) = map(c->constructconstraint!(c,sense), x) should be changed to return a flat Vector of constraints. Then there's no need to touch addVectorizedConstraint.

Contributor: Since this also doesn't work:

t = OffsetArray(rand(3), -3)
t .== 0

I'm not sure this is desirable behavior.

Contributor Author: OK, will change. @joehuchette, that seems like a bug in Base.

Contributor Author: Done.

@tkoolen (Contributor Author) commented Feb 28, 2017

Travis failures with no output seem to be because Amazon S3 is having issues: https://www.traviscistatus.com/incidents/hmwq9yy5dh9d.

@mlubin (Member) commented Feb 28, 2017

I'll restart the builds when Travis stabilizes a bit.

@@ -277,7 +277,8 @@ Base.sum(j::JuMPDict) = sum(values(j.tupledict))
Base.sum(j::JuMPArray{Variable}) = AffExpr(vec(j.innerArray), ones(length(j.innerArray)), 0.0)
Base.sum(j::JuMPDict{Variable}) = AffExpr(collect(values(j.tupledict)), ones(length(j.tupledict)), 0.0)
Base.sum(j::Array{Variable}) = AffExpr(vec(j), ones(length(j)), 0.0)
+Base.sum(j::AbstractArray{Variable}) = sum([j[i] for i in eachindex(j)]) # to handle non-one-indexed arrays.
function Base.sum{T<:GenericAffExpr}(affs::Array{T})
Contributor: As long as vec(::AbstractArray{Variable}) will reliably work, I think the definition above should work fine.

Contributor Author: I tried that first, but AffExpr is a type alias of GenericAffExpr{Float64,Variable}, so that directly calls the inner constructor of AffExpr, which expects precisely a Vector{Variable}. I tried adding an inner constructor to GenericAffExpr that takes AbstractArrays (+ additional outer constructors), but ran into some more issues that I don't remember anymore. Basically, it was going to be a pretty big change, whereas sum seemed to be the only current case where handling AbstractVectors was needed when constructing AffExprs.

Contributor: Cool, thanks for the explanation.
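
For context, the relevant definitions look roughly like this (simplified from 0.5-era JuMP; field names from memory, so treat as a sketch):

type GenericAffExpr{CoefType,VarType} <: AbstractJuMPScalar
    vars::Vector{VarType}
    coeffs::Vector{CoefType}
    constant::CoefType
end
typealias AffExpr GenericAffExpr{Float64,Variable}
# The default constructor takes the fields as-is, so constructing an AffExpr
# from an AbstractVector of variables (e.g. an OffsetArray) has no matching
# method without extra constructors.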

@joehuchette (Contributor):
Modulo my iffiness about changing addVectorizedConstraint, this PR looks good to me.

src/macros.jl Outdated
@@ -305,20 +305,21 @@ function constructconstraint!(normexpr::SOCExpr, sense::Symbol)
end

-constructconstraint!(x::Array, sense::Symbol) = map(c->constructconstraint!(c,sense), x)
+constructconstraint!(x::AbstractArray, sense::Symbol) = constructconstraint!([x[i] for i in eachindex(x)], sense::Symbol)
Contributor: the second ::Symbol shouldn't be necessary

@joehuchette (Contributor):
Looks good to merge to me. @mlubin?

@mlubin (Member) commented Mar 1, 2017

I'm having trouble viewing the status of Travis.

@mlubin merged commit 0b4bc7b into jump-dev:master Mar 1, 2017
@mlubin (Member) commented Mar 1, 2017

@tkoolen, thanks for addressing all of our concerns. Looking forward to seeing the uses of AxisArrays.

@tkoolen deleted the abstractarrays branch March 1, 2017 18:52
@tkoolen (Contributor Author) commented Mar 1, 2017

Awesome, thanks guys!

@joehuchette (Contributor):
Thanks for this, @tkoolen!
