
GH-98831: Remove super-instruction definitions, use macro instructions instead #100124

Status: Closed (2 commits)

26 changes: 18 additions & 8 deletions Python/bytecodes.c
@@ -86,12 +86,14 @@ static PyObject *exit_func, *lasti, *val, *retval, *obj, *iter;
static size_t jump;
// Dummy variables for cache effects
static _Py_CODEUNIT when_to_jump_mask, invert, counter, index, hint;
static _Py_CODEUNIT word;
static uint32_t type_version;
// Dummy opcode names for 'op' opcodes
#define _COMPARE_OP_FLOAT 1003
#define _COMPARE_OP_INT 1004
#define _COMPARE_OP_STR 1005
#define _JUMP_IF 1006
Comment on lines 92 to 95
Member:

Should these just be zeroes, too? Or even -1, just to emphasize that they aren't real?

Member Author:

Yeah, they should all be zeros. Since the value is never used I'm not sure what the point of -1 would be.

#define JOIN 0

static PyObject *
dummy_func(
@@ -115,6 +117,14 @@ dummy_func(
switch (opcode) {

// BEGIN BYTECODES //

op(JOIN, (word/1 --)) {
Member:

Hmmm. It feels weird to treat the next instruction as a cache entry.

Why don't all macro components generate code to reload the incremented opcodes/opargs unconditionally? That way we wouldn't need JOIN at all. Or are we worried the compiler can't optimize it out when it's unused (or that garbage values could be too confusing)? Or does it mess up something else?

Perhaps there's a clean way of expressing in the DSL whether an op corresponds to a real instruction or not? Then we can just reload the opcode/oparg if that's true, and get rid of JOIN.

Member Author:

> Hmmm. It feels weird to treat the next instruction as a cache entry.

It's only weird if you think of the instruction stream as alternating instructions and cache entries. In the discussion about registers it was already proposed to use a "cache" entry to encode the operation (ADD, MUL, etc.) for BINARY_OP, so it's really just a stream of variable-length instructions. But yeah, this op is definitely full of internal hackery.

> Why don't all macro components generate code to reload the incremented opcodes/opargs unconditionally? That way we wouldn't need JOIN at all. Or are we worried the compiler can't optimize it out when it's unused (or that garbage values could be too confusing)? Or does it mess up something else?

I worry more about where the next oparg would be loaded from. The code generator makes assumptions about where next_instr points (for macros it keeps pointing just past the original instruction until the end; for supers it gets bumped for each component).

> Perhaps there's a clean way of expressing in the DSL whether an op corresponds to a real instruction or not? Then we can just reload the opcode/oparg if that's true, and get rid of JOIN.

That's not a bad idea -- and we actually have that way of expressing it already: inst vs op. The main downside would then be that there would no longer be a clue in the macro itself whether it defines a super or not.

#ifndef NDEBUG
opcode = _Py_OPCODE(word);
#endif
oparg = _Py_OPARG(word);
}

inst(NOP, (--)) {
}

@@ -154,11 +164,11 @@ dummy_func(
SETLOCAL(oparg, value);
}

super(LOAD_FAST__LOAD_FAST) = LOAD_FAST + LOAD_FAST;
super(LOAD_FAST__LOAD_CONST) = LOAD_FAST + LOAD_CONST;
super(STORE_FAST__LOAD_FAST) = STORE_FAST + LOAD_FAST;
super(STORE_FAST__STORE_FAST) = STORE_FAST + STORE_FAST;
super(LOAD_CONST__LOAD_FAST) = LOAD_CONST + LOAD_FAST;
macro(LOAD_FAST__LOAD_FAST) = LOAD_FAST + JOIN + LOAD_FAST;
macro(LOAD_FAST__LOAD_CONST) = LOAD_FAST + JOIN + LOAD_CONST;
macro(STORE_FAST__LOAD_FAST) = STORE_FAST + JOIN + LOAD_FAST;
macro(STORE_FAST__STORE_FAST) = STORE_FAST + JOIN + STORE_FAST;
macro(LOAD_CONST__LOAD_FAST) = LOAD_CONST + JOIN + LOAD_FAST;
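
To connect the JOIN op with these macro definitions, here is a rough sketch of the kind of case the generator could emit for LOAD_FAST__LOAD_FAST. This is an editor's illustration, not the generator's verbatim output; it assumes the familiar GETLOCAL/PUSH/DISPATCH macros from ceval.c. The point is that JOIN reloads oparg from the trailing code unit, while next_instr itself is only advanced past that unit at the end, matching the macro convention described in the discussion above.

TARGET(LOAD_FAST__LOAD_FAST) {
    {
        // First LOAD_FAST component: oparg comes from the macro's own code unit.
        PyObject *value = GETLOCAL(oparg);
        assert(value != NULL);
        Py_INCREF(value);
        PUSH(value);
    }
    {
        // JOIN component: treat the next code unit as a cache entry and
        // reload oparg (and, in debug builds, opcode) from it.
        _Py_CODEUNIT word = next_instr[0];
#ifndef NDEBUG
        opcode = _Py_OPCODE(word);
#endif
        oparg = _Py_OPARG(word);
    }
    {
        // Second LOAD_FAST component: runs against the reloaded oparg.
        PyObject *value = GETLOCAL(oparg);
        assert(value != NULL);
        Py_INCREF(value);
        PUSH(value);
    }
    next_instr += 1;  // skip the code unit consumed by JOIN
    DISPATCH();
}

The old super() expansion achieved the same effect by bumping next_instr and re-fetching oparg between the two components; the JOIN op moves that re-fetch into the DSL itself.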

inst(POP_TOP, (value --)) {
Py_DECREF(value);
@@ -2043,7 +2053,7 @@ dummy_func(
}
}
// We're praying that the compiler optimizes the flags manipulations.
super(COMPARE_OP_FLOAT_JUMP) = _COMPARE_OP_FLOAT + _JUMP_IF;
macro(COMPARE_OP_FLOAT_JUMP) = _COMPARE_OP_FLOAT + JOIN + _JUMP_IF;

// Similar to COMPARE_OP_FLOAT
op(_COMPARE_OP_INT, (unused/1, when_to_jump_mask/1, left, right -- jump: size_t)) {
@@ -2063,7 +2073,7 @@ dummy_func(
_Py_DECREF_SPECIALIZED(right, (destructor)PyObject_Free);
jump = sign_ish & when_to_jump_mask;
}
super(COMPARE_OP_INT_JUMP) = _COMPARE_OP_INT + _JUMP_IF;
macro(COMPARE_OP_INT_JUMP) = _COMPARE_OP_INT + JOIN + _JUMP_IF;

// Similar to COMPARE_OP_FLOAT, but for ==, != only
op(_COMPARE_OP_STR, (unused/1, invert/1, left, right -- jump: size_t)) {
@@ -2080,7 +2090,7 @@ dummy_func(
assert(invert == 0 || invert == 1);
jump = res ^ invert;
}
super(COMPARE_OP_STR_JUMP) = _COMPARE_OP_STR + _JUMP_IF;
macro(COMPARE_OP_STR_JUMP) = _COMPARE_OP_STR + JOIN + _JUMP_IF;
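
For orientation, this is how the quickened instruction stream lines up with these fused compare-and-jump macros, as inferred from the cache effects declared above. The _JUMP_IF body is not part of this excerpt, so the op at the end is a hypothetical sketch of what such a component could look like, not the PR's actual definition.

// COMPARE_OP, specialized in place to e.g. COMPARE_OP_INT_JUMP:
//   cache word 0: counter            (the unused/1 entry)
//   cache word 1: when_to_jump_mask  (which of <, ==, > should branch)
// POP_JUMP_IF_TRUE / POP_JUMP_IF_FALSE:
//   consumed by JOIN, so its oparg (the branch target) is the oparg _JUMP_IF sees.

// Hypothetical sketch of the _JUMP_IF component (assumed, not shown in this diff):
op(_JUMP_IF, (jump: size_t --)) {
    assert(opcode == POP_JUMP_IF_TRUE || opcode == POP_JUMP_IF_FALSE);
    if (jump) {
        JUMPBY(oparg);  // oparg was reloaded by JOIN from the branch instruction's word
    }
}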

// stack effect: (__0 -- )
inst(IS_OP) {
83 changes: 64 additions & 19 deletions Python/generated_cases.c.h

Some generated files are not rendered by default.
