Make InexactError more informative #20005

Merged: 2 commits, Jul 11, 2017
Conversation

timholy (Member) commented Jan 12, 2017

An initial stab at #18521. It doesn't work yet: the C-code part isn't right. I haven't put a lot of time into this, but if those who spend a lot of time on the compiler see obvious ways to fix the problems, I'm all ears.

timholy (Member, Author) left a comment:

Pathetic 😄

src/cgutils.cpp Outdated
@@ -853,6 +853,31 @@ static Value *emit_bounds_check(const jl_cgval_t &ainfo, jl_value_t *ty, Value *
return im1;
}

static void emit_inexacterror_unless(Value *cond, jl_value_t *type, jl_value_t *x, jl_codectx_t *ctx)
timholy (Member, Author):

Triggers a segfault while building in int.jl.

CreateTrunc(builder.
CreateLShr(x, ConstantInt::get(t, t->getPrimitiveSizeInBits()-1)),
T_int1),
xtyp, jl_false, ctx);
timholy (Member, Author):

I just plucked jl_false out of thin air, desperate for an object with "type" jl_value_t*.

@@ -904,7 +904,7 @@ JL_DLLEXPORT jl_value_t *jl_check_top_bit(jl_value_t *a)
if (!jl_is_bitstype(ty))
jl_error("check_top_bit: value is not a bitstype");
if (signbitbyte(jl_data_ptr(a), jl_datatype_size(ty)))
jl_throw(jl_inexact_exception);
jl_throw(jl_new_struct(jl_inexacterror_type, jl_any_type, jl_any_type, a));
timholy (Member, Author):

Definitely unsure what should go here.

Contributor:

I assume this depends on how you want to interpret it? It should be consistent with the one in codegen, at least.

@kshyatt added the "error handling" (Handling of exceptions by Julia or the user) and "types and dispatch" (Types, subtyping and method dispatch) labels on Jan 12, 2017
src/cgutils.cpp Outdated
#else
builder.CreateCall3(prepare_call(jlinexacterror_func),
fname_val, literal_pointer_val(type),
boxed(x, ctx, false));
Contributor:

This won't work since fname_val is not a jl_value_t*.

Since all three values are known at compile time, might as well just construct the exception at compile time and throw that?

Contributor:

Also, given how you use it in Julia code, I think you need more runtime variables here (in particular, x won't always be known at compile time). In that case you should pass in x as a Value* and use that in the call.

timholy (Member, Author):

It might be obvious to you, but how do I extract the values? Or conversely, how does julia code interact with a Value*?

Contributor:

A jl_value_t* is a Julia object; in codegen it generally represents the expression, so it should not be used to construct the error. You need to construct the error from a Julia value at runtime, which is created by evaluating the expression represented by the jl_value_t* at runtime and is itself represented as a Value*. That Value* should also be a boxed jl_value_t* rather than an unboxed one. This information is kept in the jl_cgval_t, which is a wrapper around Value* that also records which kind of representation it holds.

For the generic_trunc one, you can get the jl_cgval_t if you inline the auto_unbox(x). For the emit_untyped_intrinsic case, you can construct the jl_cgval_t from the Value *x and the jl_value_t *xtyp; however, it seems better to make the caller pass in the jl_cgval_t directly, since it's always available.

From the jl_cgval_t, boxed should give you the right pointer to pass to the C function that constructs the exception.

timholy (Member, Author):

Very helpful, @yuyichao. Sorry to be dense, but I'm not finding an auto_unbox(x), and auto_unbox(x, ctx) seems to return a Value*, not a jl_cgval_t. I'm pushing the current state of the code so you can see more explicitly what I'm trying to do.

Contributor:

I mean that instead of using the Value *auto_unbox(jl_value_t *x, jl_codectx_t *ctx), which emits the expression and gives you an unboxed value, you should "inline" that function, i.e. do

    jl_cgval_t v = emit_expr(x, ctx);
    return auto_unbox(v, ctx);

to first emit the expression, getting the jl_cgval_t, and then use the other signature of auto_unbox to get the Value* that's used in the original version of the code.

timholy (Member, Author):

Thanks again. I made a bit more progress but still don't really get it (I know next to nothing about LLVM, how you introspect into a Value*, etc.), and the steep learning curve plus being overdue on other duties means I won't get back to this for a little while. I've pushed my latest version. If anyone is eager to run with it, it's yours for the taking; otherwise I'll get back to it eventually.

timholy (Member, Author) commented May 22, 2017

OK, I'm back to this. This time I think I'm really close, but the codegen part still isn't right: once it starts building inference.jl, memory spirals totally out of control and nearly brings my machine to a halt. Any tips would be most appreciated. You can ignore the julia part for now and just focus on src/, most likely src/intrinsics.cpp.

I should probably explain that, for ease of development, I decided to first create a new type, InvalidValueError, for the new error parametrization, and then use it in place of calls to InexactError. That let me have a buildable system at intermediate stages of development. (Assuming people prefer the original name, I'll search-replace InvalidValueError->InexactError at the end. Though perhaps there's some reason to consider a rename.)
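For reference, the shape of the new error type is roughly the following. This is only a sketch: the field names and types are assumed from the constructor that appears later in this thread, and the final name may well revert to InexactError.

    # Sketch of the richer error parametrization described above (field layout
    # assumed; names/types may change before merging):
    struct InvalidValueError <: Exception
        func::Symbol   # operation that failed, e.g. :convert or :trunc
        T              # target type of the conversion
        val            # offending value
    end

With something along these lines, a failed convert can report the function, the target type, and the value instead of a bare InexactError().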

timholy force-pushed the teh/better_errors branch 2 times, most recently from 71da8c7 to fd4db78 (May 31, 2017 19:57)
timholy (Member, Author) commented May 31, 2017

I think I got a little farther (I finally noticed the existence of jl_dump_llvm_value; I should have read https://docs.julialang.org/en/latest/devdocs/debuggingtips.html#gdb-debugging-tips-1 more carefully). What gdb shows seems, to me, promising:

Reading symbols from /home/tim/src/julia-1.0/usr/bin/julia-debug...done.
(gdb) b generic_trunc_exception
Function "generic_trunc_exception" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (generic_trunc_exception) pending.
(gdb) run
Starting program: /home/tim/src/julia-1.0/usr/bin/julia-debug -C native --output-ji /home/tim/src/julia-1.0/usr/lib/julia/inference.ji --startup-file=no -g0 -O0 coreimg.jl
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff3fe4700 (LWP 14362)]
essentials.jl
ctypes.jl
generator.jl
reflection.jl
options.jl
promotion.jl
tuple.jl
pair.jl
traits.jl
range.jl
expr.jl
error.jl
bool.jl
number.jl
int.jl
operators.jl
pointer.jl
indices.jl
array.jl
abstractarray.jl
hashing.jl
nofloat_hashing.jl
reduce.jl
bitarray.jl
intset.jl
associative.jl
docs/core.jl
inference.jl

Thread 1 "julia-debug" hit Breakpoint 1, generic_trunc_exception (to=0x626650, x=0xce2c28, ctx=0x7fffffff9380, jlto=0xb56150)
    at /home/tim/src/julia-1.0/src/intrinsics.cpp:505
505     {
(gdb) call jl_dump_llvm_type(to)
i32

(gdb) call jl_dump_llvm_value(x)
  %12 = load i64, i64* %11, align 16, !tbaa !5
(gdb) call jl_dump_llvm_value(jlto)
i32 undef
(gdb) f 2
#2  0x00007ffff76a5fe5 in generic_cast (f=JL_I::checked_trunc_sint, 
    generic=0x7ffff76a6307 <generic_trunc_schecked(llvm::Type*, llvm::Value*, jl_codectx_t*, llvm::Value*)>, argv=0x7fffffff86b0, ctx=0x7fffffff9380, 
    toint=true, fromint=true) at /home/tim/src/julia-1.0/src/intrinsics.cpp:495
495         Value *ans = generic(to, from, ctx, UndefValue::get(julia_type_to_llvm(jlto)));
(gdb) bt
#0  generic_trunc_exception (to=0x626650, x=0xce2c28, ctx=0x7fffffff9380, jlto=0xb56150) at /home/tim/src/julia-1.0/src/intrinsics.cpp:505
#1  0x00007ffff76a63bc in generic_trunc_schecked (to=0x626650, x=0xce2c28, ctx=0x7fffffff9380, jlto=0xb56150)
    at /home/tim/src/julia-1.0/src/intrinsics.cpp:531
#2  0x00007ffff76a5fe5 in generic_cast (f=JL_I::checked_trunc_sint, 
    generic=0x7ffff76a6307 <generic_trunc_schecked(llvm::Type*, llvm::Value*, jl_codectx_t*, llvm::Value*)>, argv=0x7fffffff86b0, ctx=0x7fffffff9380, 
    toint=true, fromint=true) at /home/tim/src/julia-1.0/src/intrinsics.cpp:495
#3  0x00007ffff76a7edb in emit_intrinsic (f=JL_I::checked_trunc_sint, args=0x7ffff0622f90, nargs=2, ctx=0x7fffffff9380)
    at /home/tim/src/julia-1.0/src/intrinsics.cpp:797
#4  0x00007ffff76b4ec2 in emit_call (ex=0x7ffff06404b0, ctx=0x7fffffff9380) at /home/tim/src/julia-1.0/src/codegen.cpp:3434
#5  0x00007ffff76b9472 in emit_expr (expr=0x7ffff06404b0, ctx=0x7fffffff9380) at /home/tim/src/julia-1.0/src/codegen.cpp:4139
#6  0x00007ffff76c499e in emit_function (lam=0x7ffff062eb90, src=0x7ffff04dbbd0, world=1098, declarations=0x7ffff062ebf0, 
    params=0x7ffff7aab8c0 <jl_default_cgparams>) at /home/tim/src/julia-1.0/src/codegen.cpp:6095
#7  0x00007ffff769023c in jl_compile_linfo (pli=0x7fffffff9a68, src=0x7ffff04dbbd0, world=1098, params=0x7ffff7aab8c0 <jl_default_cgparams>)
    at /home/tim/src/julia-1.0/src/codegen.cpp:1256
#8  0x00007ffff7690e03 in jl_generate_fptr (li=0x7ffff062eb10, _F=0x0, world=2759) at /home/tim/src/julia-1.0/src/codegen.cpp:1496
#9  0x00007ffff75e8023 in jl_compile_method_internal (fptr=0x7fffffff9b50, meth=0x7ffff062eb10) at /home/tim/src/julia-1.0/src/julia_internal.h:335
#10 0x00007ffff75e8197 in jl_call_method_internal (meth=0x7ffff062eb10, args=0x7fffffff9be0, nargs=3) at /home/tim/src/julia-1.0/src/julia_internal.h:363
#11 0x00007ffff75ee885 in jl_apply_generic (args=0x7fffffff9be0, nargs=3) at /home/tim/src/julia-1.0/src/gf.c:1923
#12 0x00007fffee270e1c in japi1_cconvert_442 ()
#13 0x00007ffff75e80ce in jl_call_fptr_internal (fptr=0x7fffffff9ce0, meth=0x7fffefd7e710, args=0x7fffffff9f38, nargs=3)
    at /home/tim/src/julia-1.0/src/julia_internal.h:348
#14 0x00007ffff75e81df in jl_call_method_internal (meth=0x7fffefd7e710, args=0x7fffffff9f38, nargs=3) at /home/tim/src/julia-1.0/src/julia_internal.h:367
#15 0x00007ffff75ee885 in jl_apply_generic (args=0x7fffffff9f38, nargs=3) at /home/tim/src/julia-1.0/src/gf.c:1923
#16 0x00007fffee2859b5 in japi1.methods_by_ftype_756 ()
#17 0x00007ffff75e80ce in jl_call_fptr_internal (fptr=0x7fffffffa620, meth=0x7ffff062d110, args=0x7fffffffa6b0, nargs=6)
    at /home/tim/src/julia-1.0/src/julia_internal.h:348
#18 0x00007ffff75e81df in jl_call_method_internal (meth=0x7ffff062d110, args=0x7fffffffa6b0, nargs=6) at /home/tim/src/julia-1.0/src/julia_internal.h:367
#19 0x00007ffff75ee885 in jl_apply_generic (args=0x7fffffffa6b0, nargs=6) at /home/tim/src/julia-1.0/src/gf.c:1923
#20 0x00007fffee284278 in japi1.methods_by_ftype_751 ()
#21 0x00007ffff75e80ce in jl_call_fptr_internal (fptr=0x7fffffffa890, meth=0x7ffff062cc90, args=0x7fffffffab08, nargs=4)
    at /home/tim/src/julia-1.0/src/julia_internal.h:348
#22 0x00007ffff75e81df in jl_call_method_internal (meth=0x7ffff062cc90, args=0x7fffffffab08, nargs=4) at /home/tim/src/julia-1.0/src/julia_internal.h:367
#23 0x00007ffff75ee885 in jl_apply_generic (args=0x7fffffffab08, nargs=4) at /home/tim/src/julia-1.0/src/gf.c:1923
#24 0x00007fffee280e15 in japi1_anonymous_677 ()
#25 0x00007ffff76240e1 in jl_call_fptr_internal (fptr=0x7fffffffb890, meth=0x7ffff058a310, args=0x7fffffffb8f0, nargs=1)
    at /home/tim/src/julia-1.0/src/julia_internal.h:348
#26 0x00007ffff76241f2 in jl_call_method_internal (meth=0x7ffff058a310, args=0x7fffffffb8f0, nargs=1) at /home/tim/src/julia-1.0/src/julia_internal.h:367
#27 0x00007ffff76265fa in jl_toplevel_eval_flex (e=0x7ffff0614cf0, fast=1, expanded=1) at /home/tim/src/julia-1.0/src/toplevel.c:587
#28 0x00007ffff75f911c in jl_parse_eval_all (fname=0x7fffef0afdf8 "inference.jl", content=0x0, contentlen=0) at /home/tim/src/julia-1.0/src/ast.c:873
#29 0x00007ffff7626769 in jl_load (fname=0x7fffef0afdf8 "inference.jl") at /home/tim/src/julia-1.0/src/toplevel.c:614
#30 0x00007ffff76267aa in jl_load_ (str=0x7fffef0afdf0) at /home/tim/src/julia-1.0/src/toplevel.c:621
#31 0x00007fffee258295 in japi1_include_57 ()
#32 0x00007ffff75e80ce in jl_call_fptr_internal (fptr=0x7fffffffbda0, meth=0x7fffef007f10, args=0x7fffffffbe30, nargs=2)
    at /home/tim/src/julia-1.0/src/julia_internal.h:348
#33 0x00007ffff75e81df in jl_call_method_internal (meth=0x7fffef007f10, args=0x7fffffffbe30, nargs=2) at /home/tim/src/julia-1.0/src/julia_internal.h:367
#34 0x00007ffff75ee885 in jl_apply_generic (args=0x7fffffffbe30, nargs=2) at /home/tim/src/julia-1.0/src/gf.c:1923
#35 0x00007ffff7605518 in do_call (args=0x7ffff028c2f0, nargs=2, s=0x0) at /home/tim/src/julia-1.0/src/interpreter.c:75
#36 0x00007ffff760645b in eval (e=0x7ffff0281c30, s=0x0) at /home/tim/src/julia-1.0/src/interpreter.c:242
#37 0x00007ffff76051ab in jl_interpret_toplevel_expr (e=0x7ffff0281c30) at /home/tim/src/julia-1.0/src/interpreter.c:34
#38 0x00007ffff7626552 in jl_toplevel_eval_flex (e=0x7ffff0281c30, fast=1, expanded=1) at /home/tim/src/julia-1.0/src/toplevel.c:575
#39 0x00007ffff7624d39 in jl_eval_module_expr (ex=0x7fffef0afe70) at /home/tim/src/julia-1.0/src/toplevel.c:203
---Type <return> to continue, or q <return> to quit--- q
Quit
(gdb) f 5
#5  0x00007ffff76b9472 in emit_expr (expr=0x7ffff06404b0, ctx=0x7fffffff9380) at /home/tim/src/julia-1.0/src/codegen.cpp:4139
4139            jl_cgval_t res = emit_call(ex, ctx);
(gdb) call jl_(ex)
Expr(:call, Core.Inference.checked_trunc_sint, Int32, SlotNumber(id=3))::Any

Yet the memory still blows up (without hitting my breakpoint a second time) to about 5GB, then I kill it.

I'm guessing it's the last argument that's problematic: I'm trying to get a Value* corresponding to the Julia type that we're attempting to convert to, so I can stuff it into the InvalidValueError, but I suspect I'm putting in the actual LLVM type (i32) instead.

timholy (Member, Author) commented May 31, 2017

I should add that, for this call, targ.V is NULL.

timholy (Member, Author) commented May 31, 2017

After rereading @yuyichao's comments above, I tried this diff:

diff --git a/src/intrinsics.cpp b/src/intrinsics.cpp
index 164c114..1730f48 100644
--- a/src/intrinsics.cpp
+++ b/src/intrinsics.cpp
@@ -492,7 +492,8 @@ static jl_cgval_t generic_cast(
     if (!to || !vt)
         return emit_runtime_call(f, argv, 2, ctx);
     Value *from = emit_unbox(vt, v, v.typ);
-    Value *ans = generic(to, from, ctx, UndefValue::get(julia_type_to_llvm(jlto)));
+    Value *jltov = boxed(emit_expr(jlto, ctx), ctx);
+    Value *ans = generic(to, from, ctx, jltov);
     return mark_julia_type(ans, false, jlto, ctx);
 }

and got this (from within generic_trunc_exception):

(gdb) call jl_dump_llvm_value(jlto)
  %13 = load i8**, i8*** @"+Main.Core.Int32227", !tbaa !2

which seems more promising. But it still blows up. I stepped a couple of lines forward and got this:

(gdb) n
507         Type *lt = julia_type_to_llvm((jl_value_t*)jl_invalidvalueerror_type);
(gdb) n
508         Value *strct = UndefValue::get(lt);
(gdb) n
509         strct = builder.CreateInsertElement(strct,
(gdb) call jl_dump_llvm_value(strct)
i8** undef
(gdb) call jl_dump_llvm_type(lt)
i8**

(gdb) p lt
$1 = (llvm::Type *) 0x718d20
(gdb) p strct
$2 = (llvm::Value *) 0xc3f940

and then it hangs on the next line:

 strct = builder.CreateInsertElement(strct,
                                        ConstantDataArray::getString(builder.getContext(),
                                                                     StringRef("convert")),
                                        uint64_t(0));

timholy (Member, Author) commented Jun 1, 2017

Got rid of the memory explosion. Now I'm just getting

Thread 1 "julia-debug" received signal SIGSEGV, Segmentation fault.
0x00007ffff764ebcf in gc_try_setmark (obj=0x747265766e6f63, nptr=0x36e7f10, ptag=0x7ffffffded88, 
    pbits=0x7ffffffded53 "\001\377\003") at /home/tim/src/julia-1.0/src/gc.c:1374
1374        uintptr_t tag = o->header;

while building inference.jl. Unfortunately call jl_(obj) doesn't work.

timholy (Member, Author) commented Jun 2, 2017

This version builds the exception in codegen and successfully throws the right type:

julia> convert(Int8, 200)
ERROR: InvalidValueError: convert(Int8, 200::Int64)
Stacktrace:
 [1] convert(::Type{Int8}, ::Int64) at ./int.jl:328

But it's still missing the right exception type from check_top_bit, so it doesn't pass the tests as I've got it set up. If I enable the topbit_exception code I get a segfault during build (but after successfully building coreimg.jl).

At this point I've put in more time than I can afford, so barring insights from others I may have to abandon it.
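For anyone picking this up, here is a hypothetical example (mine, not from the PR or its tests) of the path that still needs fixing: check_top_bit guards conversions of negative signed values to unsigned types.

    # Hypothetical illustration of the check_top_bit path discussed above:
    # a negative signed value converted to an unsigned type trips the
    # top-bit check, and that path still throws the old argument-free error.
    convert(UInt8, Int8(-1))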

timholy (Member, Author) commented Jul 8, 2017

@nanosoldier runbenchmarks(ALL, vs = ":master")

nanosoldier (Collaborator):
Something went wrong when running your job:

NanosoldierError: failed to run benchmarks against primary commit: failed process: Process(`sudo cset shield -e su nanosoldier -- -c ./benchscript.sh`, ProcessExited(1)) [1]

Logs and partial data can be found here
cc @jrevels

timholy (Member, Author) commented Jul 8, 2017

Nanosoldier seems to have encountered some kind of problem. However, I've run the benchmark suite locally and it seems fine. I'll try again here, and if necessary again tomorrow, but if neither works I'll probably merge anyway.

@nanosoldier runbenchmarks(ALL, vs = ":master")

tkelman (Contributor) commented Jul 8, 2017

that could be a sign that a from scratch build is somehow broken? CI won't test that

much more likely, this breaks a package or the benchmark suite that nanosoldier depends on. please don't merge if this breaks benchmarking

T::Type
val

InexactError(f::Symbol, T::ANY, val::ANY) = (@_noinline_meta; new(f, T, val))
Contributor:

about to be deprecated, depending on the order things get merged in

timholy (Member, Author):

What's about to be deprecated?

Contributor:

ANY

timholy (Member, Author):

Oh, you mean ANY. Yes, very exciting.
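For what it's worth, a sketch of what this constructor might become once ANY is gone, assuming @nospecialize ends up as the replacement (this is not part of the PR):

    # Hypothetical post-ANY form of the inner constructor above (assumes the
    # ANY -> @nospecialize migration; not what this PR commits):
    InexactError(f::Symbol, @nospecialize(T), @nospecialize(val)) =
        (@_noinline_meta; new(f, T, val))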

timholy (Member, Author) commented Jul 8, 2017

> that could be a sign that a from scratch build is somehow broken? CI won't test that

Is there a good way to test that short of cloning the repo from scratch?

tkelman (Contributor) commented Jul 8, 2017

this needs fallback behavior of some kind for calls to InexactError() in packages, doesn't it? either a deprecation or defaults?

timholy (Member, Author) commented Jul 8, 2017

> much more likely, this breaks a package or the benchmark suite that nanosoldier depends on. please don't merge if this breaks benchmarking

I've run the entire suite locally several times. But I can hold out for someone to investigate. (Hoping to wrap up my 1.0 contributions by the end of the weekend, but waiting doesn't cost much.)

nanosoldier (Collaborator):
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels

timholy (Member, Author) commented Jul 9, 2017

@nanosoldier runbenchmarks(ALL, vs = ":master")

timholy (Member, Author) commented Jul 9, 2017

Locally I'm seeing about 12 regressions compared to master. Only two of these seem real, and they're really just one because they are x*y and y*x of the following:

x = Complex{Int}(2)
y = Complex{UInt}(2)

The performance change is pretty serious: @btime shows 2.379 ns on master and 7.668 ns on this PR. Surprisingly, AFAICT it comes down to the registers assigned by LLVM. (EDIT: Nope, it was just an inlining issue.)
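(Roughly what I'm timing locally, in case anyone wants to reproduce; the snippet assumes BenchmarkTools.jl is installed and should be run on both master and this branch:)

    # Minimal reproduction of the timing quoted above (assumes BenchmarkTools.jl):
    using BenchmarkTools
    x = Complex{Int}(2)
    y = Complex{UInt}(2)
    @btime $x * $y    # ~2.4 ns on master vs ~7.7 ns on this PR, per the numbers above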

But this is the only case I'm worried about. Of the rest:

  • a bitshift change occurs because the test was being reduced to returning a constant in the wake of #22210 (Decide inline-worthiness based on a more nuanced cost model); that test should get an @noinline (I've checked that adding one makes the two perform similarly)
  • in every other case, while we're worse than current master, we're nevertheless still equal to or better than 0.6. For example:
    0.6:
julia> run(BaseBenchmarks.SUITE[["array", "index", ("sumrange", "BitArray{2}")]])
BenchmarkTools.Trial: 
  memory estimate:  109.38 KiB
  allocs estimate:  2000
  --------------
  minimum time:     138.231 μs (0.00% GC)
  median time:      146.327 μs (0.00% GC)
  mean time:        184.556 μs (20.24% GC)
  maximum time:     8.005 ms (97.58% GC)
  --------------
  samples:          5384
  evals/sample:     1

Master:

julia> run(BaseBenchmarks.SUITE[["array", "index", ("sumrange", "BitArray{2}")]])
BenchmarkTools.Trial: 
  memory estimate:  101.56 KiB
  allocs estimate:  1500
  --------------
  minimum time:     36.307 μs (0.00% GC)
  median time:      39.758 μs (0.00% GC)
  mean time:        75.239 μs (46.52% GC)
  maximum time:     10.656 ms (99.56% GC)
  --------------
  samples:          10000
  evals/sample:     1

This PR:

julia> run(BaseBenchmarks.SUITE[["array", "index", ("sumrange", "BitArray{2}")]])
BenchmarkTools.Trial: 
  memory estimate:  109.38 KiB
  allocs estimate:  2000
  --------------
  minimum time:     113.097 μs (0.00% GC)
  median time:      118.603 μs (0.00% GC)
  mean time:        159.162 μs (24.01% GC)
  maximum time:     14.161 ms (98.98% GC)
  --------------
  samples:          6189
  evals/sample:     1

So we give up a few of the gains from #22210, but we're still doing pretty well. (It just comes down to the @noinline call to throw_inexacterror being penalized 20 cycles of cost, which prevents the BitArray constructor from being inlined. Again, this could be solved if we knew that the call never returns, so we didn't have to penalize it.)
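For clarity, the helper in question looks roughly like this; a sketch only, not the exact Base definition:

    # Sketch of the out-of-line throwing helper discussed above (approximate
    # signature; see Base for the real definition). Keeping the throw in a
    # @noinline function keeps the hot path small, but the inlining cost model
    # still charges the call site for it.
    @noinline throw_inexacterror(f::Symbol, T, val) = throw(InexactError(f, T, val))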

For these reasons, I think the only thing we have to consider is the case of Complex{Int}*Complex{UInt}, and whether that's something we need to worry about.

nanosoldier (Collaborator):
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels

NEWS.md Outdated
@@ -139,6 +139,8 @@ Deprecated or removed
* The method `replace(s::AbstractString, pat, r, count)` with `count <= 0` is deprecated
in favor of `replace(s::AbstractString, pat, r, typemax(Int))` ([#22325]).

* `InexactError` now takes arguments: `InexactError(func::Symbol,
type, -3)` now prints as `ERROR: InexactError: func(type, -3)`.
Contributor:

add pr reference

timholy force-pushed the teh/better_errors branch 2 times, most recently from 829216a to fb1d2cf (July 10, 2017 20:27)
sbromberger (Contributor) commented Nov 27, 2017

Hi, sorry to unbury this one, but I'm puzzled by this. In 0.6.1, this appears to have been merged, but I'm not getting the output I thought I'd get:

julia> methods(InexactError)
# 2 methods for generic function "(::Type)":
InexactError() in Core at boot.jl:208
(::Type{T})(arg) where T in Base at sysimg.jl:77

julia> using Compat

julia> methods(InexactError)
# 3 methods for generic function "(::Type)":
InexactError() in Core at boot.jl:208
InexactError(name::Symbol, T, val) in Compat at /Users/bromberger1/.julia/v0.6/Compat/src/Compat.jl:585
(::Type{T})(arg) where T in Base at sysimg.jl:77

julia> throw(InexactError(:sum, Int, -3))
ERROR: InexactError()

I was hoping that I'd get some sort of additional output with that last throw. I'm also confused as to why the new constructor requires Compat.jl. I'm obviously misunderstanding something.

EDIT: and I just realized that this probably wasn't backported to 0.6. Too bad.

timholy (Member, Author) commented Nov 27, 2017

There's a "trivial" backport in Compat, to ensure that if packages use the 3-arg constructor it won't throw an error. But it just drops the 3 arguments and behaves the way that it always did on 0.6. The 0.7 behavior involves a change in a type definition, and that is intrinsically not backportable.
