Commit Graph

264 Commits

Author SHA1 Message Date
Mark Shannon aea0c586d1
GH-127010: Don't lazily track and untrack dicts (GH-127027) 2024-11-20 16:41:20 +00:00
Brandt Bucher 48c50ff1a2
GH-126892: Reset warmup counters when JIT compiling code (GH-126893) 2024-11-20 08:11:25 -08:00
Hugo van Kemenade 899fdb213d
Revert "GH-126491: GC: Mark objects reachable from roots before doing cycle collection (GH-126502)" (#126983) 2024-11-19 11:25:09 +02:00
Mark Shannon b0fcc2c47a
GH-126491: GC: Mark objects reachable from roots before doing cycle collection (GH-126502)
* Mark almost all reachable objects before doing collection phase

* Add stats for objects marked

* Visit new frames before each increment

* Remove lazy dict tracking

* Update docs

* Clearer calculation of work to do.
2024-11-18 14:31:26 +00:00
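The marking idea above can be illustrated with a small conceptual sketch (plain Python, not CPython's GC internals): objects reachable from known roots are marked first, so the cycle-collection phase only has to examine what remains.

```python
# Conceptual sketch only: mark everything reachable from the roots, then
# restrict cycle detection to the unmarked remainder.
def mark_reachable(roots, edges):
    """edges maps each object id to the ids it references."""
    marked, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj in marked:
            continue
        marked.add(obj)
        stack.extend(edges.get(obj, ()))
    return marked

all_objects = {1, 2, 3, 4, 5}
edges = {1: [2], 2: [1], 3: [4], 4: [3], 5: []}   # 3 <-> 4 form an unreachable cycle
reachable = mark_reachable(roots=[1, 5], edges=edges)
print(sorted(all_objects - reachable))            # [3, 4] are the only collection candidates
```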
Sergey B Kirpichev d9e251223e
gh-103951: enable optimization for fast attribute access on module subclasses (GH-126264)
Co-authored-by: Nicolas Tessore <n.tessore@ucl.ac.uk>
2024-11-15 16:03:38 +08:00
Ken Jin 6293d00e72
gh-120619: Strength reduce function guards, support 2-operand uop forms (GH-124846)
Co-authored-by: Brandt Bucher <brandtbucher@gmail.com>
2024-11-09 11:35:33 +08:00
Peter Bierma 1371295e67
gh-126366: Fix crash if `__iter__` raises an exception during `yield from` (#126369) 2024-11-05 15:26:36 +05:30
mpage 2e95c5ba3b
gh-115999: Implement thread-local bytecode and enable specialization for `BINARY_OP` (#123926)
Each thread specializes a thread-local copy of the bytecode, created on the first RESUME, in free-threaded builds. All copies of the bytecode for a code object are stored in the co_tlbc array on the code object. Each thread reserves a globally unique index identifying its copy of the bytecode in all co_tlbc arrays at thread creation and releases the index at thread destruction. The first entry in every co_tlbc array always points to the "main" copy of the bytecode that is stored at the end of the code object. This ensures that no bytecode is copied for programs that do not use threads.

Thread-local bytecode can be disabled at runtime by providing either -X tlbc=0 or PYTHON_TLBC=0. Disabling thread-local bytecode also disables specialization.

Concurrent modifications to the bytecode made by the specializing interpreter and instrumentation use atomics, with specialization taking care not to overwrite an instruction that was instrumented concurrently.
2024-11-04 11:13:32 -08:00
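The two switches named in the commit message can be exercised as below; a minimal sketch, assuming only that `-X` options appear in `sys._xoptions` and that the environment variable is spelled `PYTHON_TLBC` as stated above.

```python
import os
import sys

# Run as:  python -X tlbc=0 this_script.py   or   PYTHON_TLBC=0 python this_script.py
xopt = sys._xoptions.get("tlbc")        # "0" when -X tlbc=0 was passed, else None
envopt = os.environ.get("PYTHON_TLBC")  # "0" when the environment variable is set, else None

print("tlbc -X option:", xopt)
print("PYTHON_TLBC env:", envopt)
```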
Tomas R. aab58a93ef
gh-118423: Add `INSTRUCTION_SIZE` macro to code generator (GH-125467) 2024-10-29 17:25:05 +00:00
Mark Shannon faa3272fb8
GH-125837: Split `LOAD_CONST` into three. (GH-125972)
* Add LOAD_CONST_IMMORTAL opcode

* Add LOAD_SMALL_INT opcode

* Remove RETURN_CONST opcode
2024-10-29 11:15:42 +00:00
Mark Shannon b61fece852
GH-125868: Fix STORE_ATTR_WITH_HINT specialization (GH-125876) 2024-10-24 11:57:02 +01:00
mpage de5a6c7c7d
gh-121459: Fix a couple of uses of `PyStackRef_FromPyObjectSteal` (#125711)
* Fix usage of PyStackRef_FromPyObjectSteal in CALL_TUPLE_1

This was missed in gh-124894

* Fix usage of PyStackRef_FromPyObjectSteal in _CALL_STR_1

This was missed in gh-124894

* Regenerate code
2024-10-21 11:08:13 -07:00
sobolevn 0c8c665581
gh-125470: Fix warning in `Python/generated_cases.c.h` (#125471)
Co-authored-by: Kirill Podoprigora <kirill.bast9@mail.ru>
2024-10-14 23:46:17 +03:00
Mark Shannon 06ca33020e
GH-125323: Convert DECREF_INPUTS_AND_REUSE_FLOAT into a function that takes PyStackRefs. (GH-125439) 2024-10-14 14:18:57 +01:00
Ken Jin 4b358ee647
gh-125323: Remove some unsafe Py_DECREFs in bytecodes.c, replacing them with PyStackRef_CLOSEs (GH-125324) 2024-10-14 09:17:51 +01:00
Mark Shannon c9014374c5
GH-125174: Make immortal objects more robust, following design from PEP 683 (GH-125251) 2024-10-10 18:19:08 +01:00
mpage f978fb4f8d
gh-115999: Refactor `LOAD_GLOBAL` specializations to avoid reloading {globals, builtins} keys (gh-124953)
Each of the `LOAD_GLOBAL` specializations is implemented roughly as:

1. Load keys version.
2. Load cached keys version.
3. Deopt if (1) and (2) don't match.
4. Load keys.
5. Load cached index into keys.
6. Load object from (4) at offset from (5).

This is not thread-safe in free-threaded builds; the keys object may be replaced
in between steps (3) and (4).

This change refactors the specializations to avoid reloading the keys object and
instead pass the keys object from guards to be consumed by downstream uops.
2024-10-09 15:18:25 +00:00
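A toy Python model of the guard pattern described in steps 1-6 (names like `Namespace` and `CachedLoad` are purely illustrative): the guard checks the cached keys version and hands the keys object itself to the consumer, so nothing is reloaded after the check.

```python
# Toy model (illustrative only) of the versioned-keys guard described above.
class Namespace:
    def __init__(self, names):
        self.keys = list(names)     # stands in for the dict's keys object
        self.keys_version = 1       # bumped whenever the keys object is replaced

class CachedLoad:
    """Caches the keys version and the index of one name (steps 2 and 5)."""
    def __init__(self, ns, name):
        self.version = ns.keys_version
        self.index = ns.keys.index(name)

    def guard(self, ns):
        # Steps 1-4: check the version and return the keys object itself,
        # so the consumer never reloads ns.keys after the check.
        if ns.keys_version != self.version:
            raise RuntimeError("deopt: keys replaced since specialization")
        return ns.keys

    def load(self, keys):
        # Step 6: index into the keys object handed over by the guard.
        return keys[self.index]

ns = Namespace(["print", "len"])
cache = CachedLoad(ns, "len")
print(cache.load(cache.guard(ns)))  # "len"
```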
Mark Shannon d1453f60c2
GH-121459: Streamline PyObject* to PyStackRef conversions by disallowing NULL pointers. (GH-124894) 2024-10-07 18:13:04 +01:00
Mark Shannon da071fa3e8
GH-119866: Spill the stack around escaping calls. (GH-124392)
* Spill the evaluation stack around escaping calls in the generated interpreter and JIT.

* The code generator tracks live, cached values so they can be saved to memory when needed.

* Spills the stack pointer around escaping calls, so that the exact stack is visible to the cycle GC.
2024-10-07 14:56:39 +01:00
Mark Shannon f55273b3b7
GH-116968: Remove branch from advance_backoff_counter (GH-124469) 2024-10-07 11:46:33 +01:00
Sam Gross 5aa91c56bf
gh-124296: Remove private dictionary version tag (PEP 699) (#124472) 2024-10-01 12:39:56 -04:00
Savannah Ostrowski 65f1237098
GH-123516: Improve JIT memory consumption by invalidating cold executors (GH-124443)
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
2024-09-27 00:35:42 +00:00
Ken Jin 198756b0f6
gh-117376: Fix off-by-ones in conversion functions (GH-124301)
Fix off-by-ones in conversion functions
2024-09-26 02:41:07 +08:00
Sam Gross f4997bb3ac
gh-123923: Defer refcounting for `f_funcobj` in `_PyInterpreterFrame` (#124026)
Use a `_PyStackRef` and defer the reference to `f_funcobj` when
possible. This avoids some reference count contention in the common case
of executing the same code object from multiple threads concurrently in
the free-threaded build.
2024-09-24 20:08:18 +00:00
Ken Jin 8810e286fa
gh-121459: Deferred LOAD_GLOBAL (GH-123128)
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
Co-authored-by: Sam Gross <655866+colesbury@users.noreply.github.com>
2024-09-14 00:23:51 +08:00
Sam Gross b2afe2aae4
gh-123923: Defer refcounting for `f_executable` in `_PyInterpreterFrame` (#123924)
Use a `_PyStackRef` and defer the reference to `f_executable` when
possible. This avoids some reference count contention in the common case
of executing the same code object from multiple threads concurrently in
the free-threaded build.
2024-09-12 12:37:06 -04:00
Mark Shannon 4ed7d1d6ac
GH-123996: Explicitly mark 'self_or_null' as an array of size 1 to ensure that it is kept in memory for calls (GH-124003) 2024-09-12 15:32:45 +01:00
Savannah Ostrowski 1fbc118c5d
GH-123545: Remove duplicate Py_DECREF when handling _PyOptimizer_Optimize errors (GH-123546) 2024-09-05 10:56:07 -07:00
Victor Stinner f1a0d96f41
gh-123091: Use _Py_IsImmortalLoose() (#123511)
Use _Py_IsImmortalLoose() in bytesobject.c, typeobject.c
and ceval.c.
2024-09-02 14:25:19 +02:00
Mark Shannon 54a05a4600
GH-123232: Factor BINARY_SLICE and STORE_SLICE to handle stats properly for tier 2. (GH-123381) 2024-08-27 10:49:39 +01:00
Kirill Podoprigora 67f2c84bff
gh-123205: `Python/bytecodes.c`: Fix compiler warning (#123206)
Fix MSVC warning "conversion from '__int64' to 'int'"
2024-08-23 15:35:25 -04:00
Mark Shannon 0b0f7befad
GH-123232: Fix "not specialized" stats (GH-123236) 2024-08-23 10:46:03 +01:00
Donghee Na 297f2e093e
gh-123083: Fix a potential use-after-free in ``STORE_ATTR_WITH_HINT`` (gh-123092) 2024-08-22 23:49:09 +09:00
Mark Shannon a4fd7aa4a6
GH-115776: Allow any fixed sized object to have inline values (GH-123192) 2024-08-21 15:52:04 +01:00
Mark Shannon 1eba8bae92
GH-123185: Check for `NULL` after calling `_PyEvalFramePushAndInit` (GH-123194) 2024-08-21 12:44:56 +01:00
Mark Shannon bb1d30336e
GH-118093: Make `CALL_ALLOC_AND_ENTER_INIT` suitable for tier 2. (GH-123140)
* Convert CALL_ALLOC_AND_ENTER_INIT to micro-ops such that tier 2 supports it

* Allow inexact arguments for CALL_ALLOC_AND_ENTER_INIT.
2024-08-20 16:52:58 +01:00
Mark Shannon c13e7d98fb
GH-118093: Specialize `CALL_KW` (GH-123006) 2024-08-16 17:11:24 +01:00
Brandt Bucher f84754b705
GH-118093: Turn some DEOPT_IFs into EXIT_IFs (GH-122998) 2024-08-14 07:54:42 -07:00
Mark Shannon eec7bdaf01
GH-120024: Remove `CHECK_EVAL_BREAKER` macro. (GH-122968)
* Factor some instructions into micro-ops to isolate CHECK_EVAL_BREAKER for escape analysis

* Eliminate CHECK_EVAL_BREAKER macro
2024-08-14 12:04:05 +01:00
Brandt Bucher 9621a7d017
GH-118093: Handle some polymorphism before requiring progress in tier two (GH-122843) 2024-08-12 12:39:31 -07:00
Sam Gross ab094d1b2b
gh-117139: Replace _PyList_FromArraySteal with stack ref variant (#122830)
This replaces `_PyList_FromArraySteal` with `_PyList_FromStackRefSteal`.
It's functionally equivalent, but takes a `_PyStackRef` array instead of
an array of `PyObject` pointers.

Co-authored-by: Ken Jin <kenjin@python.org>
2024-08-12 14:49:49 -04:00
Sam Gross 736fe4d23e
gh-117139: Fix a few `_PyStackRef` related bugs (#122831)
`BUILD_SET` should use a borrow instead of a steal. The cleanup in `_DO_CALL`
`CONVERSION_FAILED` was incorrect.

Co-authored-by: Ken Jin <kenjin@python.org>
2024-08-12 14:49:33 -04:00
Sam Gross 3e753c689a
gh-118926: Spill deferred references to stack in cases generator (#122748)
This automatically spills the results from `_PyStackRef_FromPyObjectNew`
to the in-memory stack so that the deferred references are visible to
the GC before we make any possibly escaping call.

Co-authored-by: Ken Jin <kenjin@python.org>
2024-08-07 13:23:53 -04:00
Sam Gross 674a50ef2f
gh-117139: Fix an incorrect borrow in bytecodes.c (#122318)
`_PyDict_SetItem_Take2` steals both the key (i.e., `sub`) and the value.
2024-08-07 19:06:19 +05:30
Mark Shannon 4c31791848
GH-120024: Move three more escaping calls out of conditional statements (GH-122734) 2024-08-06 14:14:52 +01:00
Mark Shannon a8be8fc6c4
GH-120024: Refactor code a bit so that escaping calls can be wrapped in spill code in code generator (GH-122693) 2024-08-06 08:40:39 +01:00
Mark Shannon 5bd72912a1
GH-122616: Simplify LOAD_ATTR_WITH_HINT and STORE_ATTR_WITH_HINT (GH-122620) 2024-08-05 16:27:48 +01:00
Mark Shannon 7aca84e557
GH-117224: Move the body of a few large-ish micro-ops into helper functions (GH-122601) 2024-08-02 16:31:17 +01:00
Mark Shannon df13a1821a
GH-118095: Add tier two support for BINARY_SUBSCR_GETITEM (GH-120793) 2024-08-01 16:19:05 -07:00
Mark Shannon a9d56e38a0
GH-122155: Track local variables between pops and pushes in cases generator (GH-122286) 2024-08-01 09:27:26 +01:00
Brandt Bucher 15d4cd0967
GH-116090: Fire RAISE events from _FOR_ITER_TIER_TWO (GH-122413) 2024-07-29 12:17:47 -07:00
Brandt Bucher 64857d849f
GH-122294: Burn in the addresses of side exits (GH-122295) 2024-07-26 09:40:15 -07:00
Mark Shannon 95a73917cd
GH-122029: Break INSTRUMENTED_CALL into micro-ops, so that its behavior is consistent with CALL (GH-122177) 2024-07-26 14:35:57 +01:00
Mark Shannon afb0aa6ed2
GH-121131: Clean up and fix some instrumented instructions. (GH-121132)
* Add support for 'prev_instr' to code generator and refactor some INSTRUMENTED instructions
2024-07-26 12:24:12 +01:00
Brandt Bucher d9efa45d74
GH-118093: Add tier two support for BINARY_OP_INPLACE_ADD_UNICODE (GH-122253) 2024-07-25 14:45:07 -07:00
Brandt Bucher 5f6001130f
GH-118093: Add tier two support for LOAD_ATTR_PROPERTY (GH-122283) 2024-07-25 10:45:28 -07:00
Mark Shannon 5e686ff57d
GH-122034: Add StackRef variants of type checks to reduce the number of PyStackRef_AsPyObjectBorrow calls (GH-122037) 2024-07-25 18:32:43 +01:00
Mark Shannon 2e14a52cce
GH-122160: Remove BUILD_CONST_KEY_MAP opcode. (GH-122164) 2024-07-25 16:24:29 +01:00
Brandt Bucher 794546fd53
GH-118093: Remove invalidated executors from side exits (GH-121885) 2024-07-24 09:16:30 -07:00
Brandt Bucher 7b36b67b1e
GH-118093: Add tier two support to several instructions (GH-121884) 2024-07-18 14:24:58 -07:00
Mark Shannon 3eacfc1a4d
GH-121784: Generate an error during code gen if a variable is marked `unused`, but is used and thus cached in a prior uop. (#121788)
* Reject uop definitions that declare values as 'unused' that are already cached by prior uops

* Track which variables are defined and only load from memory when needed

* Support explicit `flush` in macro definitions. 

* Make sure the stack is flushed where needed.
2024-07-18 12:49:24 +01:00
Mark Shannon 8ad6067bd4
GH-121012: Set index to -1 when list iterators become exhausted in tier 2 (GH-121483) 2024-07-08 14:20:13 +01:00
Sam Gross 8e8d202f55
gh-117139: Add _PyTuple_FromStackRefSteal and use it (#121244)
Avoids the extra conversion from stack refs to PyObjects.
2024-07-02 12:30:14 -04:00
Brandt Bucher 33903c53db
GH-116017: Get rid of _COLD_EXITs (GH-120960) 2024-07-01 13:17:40 -07:00
Ken Jin e6543daf12
gh-117139: Fix a few wrong steals in bytecodes.c (GH-121127)
Fix a few wrong steals in bytecodes.c
2024-06-29 02:14:48 +08:00
Ken Jin 22b0de2755
gh-117139: Convert the evaluation stack to stack refs (#118450)
This PR sets up tagged pointers for CPython.

The general idea is to create a separate struct _PyStackRef for everything on the evaluation stack to store the bits. This forces the C compiler to warn us if we try to cast things or pull things out of the struct directly.

Only for free threading: We tag the low bit if something is deferred - that means we skip incref and decref operations on it. This behavior may change in the future if Mark's plan to defer all objects in the interpreter loop pans out.

This implies that a strict stack reference discipline is required. ALL incref and decref operations on stackrefs must use the stackref variants. It is unsafe to untag something and then do normal incref/decref ops on it.

The new incref and decref variants are called dup and close. They mimic a "handle" API operating on these stackrefs.

Please read Include/internal/pycore_stackref.h for more information!

---------

Co-authored-by: Mark Shannon <9448417+markshannon@users.noreply.github.com>
2024-06-27 03:10:43 +08:00
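A toy illustration of the tagging scheme described above, in plain Python with integers standing in for pointers; the real `_PyStackRef` API differs, and only the low-bit/deferred idea and the dup/close naming are taken from the commit message.

```python
# Illustrative only: the low bit of the "pointer" marks a deferred reference,
# which the dup/close (incref/decref) variants leave untouched.
DEFERRED_BIT = 1

class Obj:
    def __init__(self):
        self.refcount = 1

def stackref_new_deferred(addr):
    return addr | DEFERRED_BIT              # tagged: refcounting is skipped

def stackref_new_owned(obj, addr, heap):
    obj.refcount += 1                       # a real, owned reference
    heap[addr] = obj
    return addr

def stackref_dup(ref, heap):
    if not ref & DEFERRED_BIT:
        heap[ref].refcount += 1             # only owned references are increffed
    return ref

def stackref_close(ref, heap):
    if not ref & DEFERRED_BIT:
        heap[ref].refcount -= 1             # only owned references are decreffed

heap = {}
o = Obj()
owned = stackref_new_owned(o, 0x1000, heap)
deferred = stackref_new_deferred(0x2000)
stackref_close(stackref_dup(owned, heap), heap)
stackref_close(owned, heap)                 # drops the original owned reference
stackref_close(deferred, heap)              # no-op for the deferred reference
print(o.refcount)                           # back to 1
```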
Mark Shannon 8f5a01707f
GH-120982: Add stack check assertions to generated interpreter code (GH-120992) 2024-06-25 16:42:29 +01:00
Brandt Bucher a47abdb45d
GH-117062: Make _JUMP_TO_TOP a general absolute jump (GH-120854) 2024-06-24 08:35:10 -07:00
Irit Katriel 65a12c559c
gh-120834: fix type of *_iframe field in _PyGenObject_HEAD declaration (#120835) 2024-06-24 10:23:38 +01:00
Mark Shannon 9cefcc0ee7
GH-120507: Lower the `BEFORE_WITH` and `BEFORE_ASYNC_WITH` instructions. (#120640)
* Remove BEFORE_WITH and BEFORE_ASYNC_WITH instructions.

* Add LOAD_SPECIAL instruction

* Reimplement `with` and `async with` statements using LOAD_SPECIAL
2024-06-18 12:17:46 +01:00
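For context, the observable behaviour of `with` is unchanged by this lowering; the rough Python-level desugaring below (approximate, and not how the bytecode literally expands) shows the special-method lookups that `LOAD_SPECIAL` now performs.

```python
class Managed:
    def __enter__(self):
        print("enter")
        return self
    def __exit__(self, exc_type, exc, tb):
        print("exit")
        return False                 # do not swallow exceptions

# "with Managed() as m: ..." behaves roughly like:
mgr = Managed()
enter = type(mgr).__enter__          # special-method lookups on the type,
exit_ = type(mgr).__exit__           # the job LOAD_SPECIAL now does
m = enter(mgr)
try:
    pass                             # body of the with block
finally:
    exit_(mgr, None, None, None)
```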
Mark Shannon 274f844830
GH-120619: Clean up `RETURN_VALUE` instruction (GH-120624)
* Rename _POP_FRAME to _RETURN_VALUE as it returns a value as well as popping a frame.

* Remove remaining _POP_FRAMEs
2024-06-17 14:40:11 +01:00
Jelle Zijlstra 9b8611eeea
gh-119180: PEP 649 compiler changes (#119361) 2024-06-11 13:06:49 +00:00
Alyssa Coghlan 3859e09e3d
gh-74929: PEP 667 C API documentation (gh-119379)
* Add docs for new APIs
* Add soft-deprecation notices
* Add What's New porting entries
* Update comments referencing `PyFrame_LocalsToFast()` to mention the proxy instead
* Other related cleanups found when looking for refs to the deprecated APIs
2024-06-01 13:59:35 +10:00
Jelle Zijlstra 80a4e38994
gh-119821: Support non-dict globals in LOAD_FROM_DICT_OR_GLOBALS (#119822)
Support non-dict globals in LOAD_FROM_DICT_OR_GLOBALS

The implementation basically copies LOAD_GLOBAL. Possibly it could be deduplicated,
but that seems like it may get hairy since the two operations have different operands.

This is important to fix in 3.14 for PEP 649, but it's a bug in earlier versions too,
and we should backport to 3.13 and 3.12 if possible.
2024-05-31 14:05:24 -07:00
Brandt Bucher 5cd3ffd6b7
GH-119258: Handle STORE_ATTR_WITH_HINT in tier two (GH-119481) 2024-05-28 12:47:54 -07:00
Brandt Bucher cfcc054dee
GH-119476: Split _CHECK_FUNCTION_VERSION out of _CHECK_FUNCTION_EXACT_ARGS (GH-119510) 2024-05-28 12:45:11 -07:00
Jelle Zijlstra 98e855fcc1
gh-119180: Add LOAD_COMMON_CONSTANT opcode (#119321)
The PEP 649 implementation will require a way to load NotImplementedError
from the bytecode. @markshannon suggested implementing this by converting
LOAD_ASSERTION_ERROR into a more general mechanism for loading constants.

This PR adds this new opcode. I will work on the rest of the implementation
of the PEP separately.

Co-authored-by: Irit Katriel <1055913+iritkatriel@users.noreply.github.com>
2024-05-22 00:46:39 +00:00
Tian Gao 0d9148823d
gh-118414: Fix assertion in YIELD_VALUE when tracing lines or instrs (#118683) 2024-05-06 21:22:59 -07:00
Mark Shannon 1ab6356ebe
GH-118095: Use broader specializations of CALL in tier 1, for better tier 2 support of calls. (GH-118322)
* Add CALL_PY_GENERAL, CALL_BOUND_METHOD_GENERAL and CALL_NON_PY_GENERAL specializations.

* Remove CALL_PY_WITH_DEFAULTS specialization

* Use CALL_NON_PY_GENERAL in more cases when otherwise failing to specialize
2024-05-04 12:11:11 +01:00
Mark Shannon da2cfc4cb6
GH-113464: Remove the extra jump via `_SIDE_EXIT` in `_EXIT_TRACE` (GH-118545) 2024-05-04 08:50:24 +01:00
Tian Gao 9c14ed0618
gh-107674: Improve performance of `sys.settrace` (GH-117133)
* Check tracing in RESUME_CHECK

* Only change to RESUME_CHECK if not tracing
2024-05-03 19:49:24 +01:00
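A minimal usage example of the API whose overhead this commit targets; nothing below is specific to the change itself.

```python
import sys

def tracer(frame, event, arg):
    # Invoked for 'call', 'line', 'return' and 'exception' events.
    print(event, frame.f_code.co_name, frame.f_lineno)
    return tracer                    # keep tracing inside this frame

def work():
    total = 0
    for i in range(3):
        total += i
    return total

sys.settrace(tracer)
work()
sys.settrace(None)
```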
Mark Shannon 72867c962c
GH-118095: Unify the behavior of tier 2 FOR_ITER branch micro-ops (GH-118420)
* Target _FOR_ITER_TIER_TWO at POP_TOP following the matching END_FOR

* Modify _GUARD_NOT_EXHAUSTED_RANGE, _GUARD_NOT_EXHAUSTED_LIST and _GUARD_NOT_EXHAUSTED_TUPLE so that they also target the POP_TOP following the matching END_FOR
2024-05-02 16:17:59 +01:00
Mark Shannon 67bba9dd0f
GH-117442: Check eval-breaker at start (rather than end) of tier 2 loops (GH-118482) 2024-05-02 13:10:31 +01:00
Mark Shannon 5b05d452cd
GH-118095: Add tier 2 support for YIELD_VALUE (GH-118380) 2024-04-30 11:33:13 +01:00
Mark Shannon ab6eda0ee5
GH-118095: Allow a variant of RESUME_CHECK in tier 2 (GH-118286) 2024-04-29 07:54:05 +01:00
Mark Shannon 3e06c7f719
GH-118095: Add dynamic exit support and FOR_ITER_GEN support to tier 2 (GH-118279) 2024-04-26 18:08:50 +01:00
Mark Shannon f180b31e76
GH-118095: Handle `RETURN_GENERATOR` in tier 2 (GH-118180) 2024-04-25 11:32:47 +01:00
Mark Shannon 83235f7791
GH-115419: Move setting the instruction pointer to error exit stubs (GH-118088) 2024-04-24 14:41:30 +01:00
Mark Shannon a6647d16ab
GH-115480: Reduce guard strength for binary ops when type of one operand is known already (GH-118050) 2024-04-22 13:34:06 +01:00
Dino Viehland 8b541c017e
gh-112075: Make instance attributes stored in inline "dict" thread safe (#114742)
Make instance attributes stored in inline "dict" thread safe on free-threaded builds
2024-04-21 22:57:05 -07:00
Dino Viehland 07525c9a85
gh-116818: Make `sys.settrace`, `sys.setprofile`, and monitoring thread-safe (#116775)
Makes sys.settrace, sys.setprofile, and monitoring generally thread-safe.

Mostly uses a stop-the-world approach and synchronization around the code object's _co_instrumentation_version. A little extra synchronization around the monitoring data may be required for it to be TSAN clean.
2024-04-19 14:47:42 -07:00
Mark Shannon 7e6fa5fced
GH-116202: Incorporate invalidation check into _START_EXECUTOR. (GH-118044) 2024-04-19 09:26:42 +01:00
Erlend E. Aasland 757b62493b
gh-117457: Regen executor cases post PR #117477 (#117559) 2024-04-05 10:13:00 +00:00
Michael Droettboom 0edde64a41
GH-117457: Correct pystats uop "miss" counts (GH-117477) 2024-04-04 15:49:18 -07:00
Dino Viehland 434bc593df
gh-112075: Make _PyDict_LoadGlobal thread safe (#117529)
Make _PyDict_LoadGlobal threadsafe
2024-04-04 12:26:07 -07:00
Guido van Rossum 060a96f1a9
gh-116968: Reimplement Tier 2 counters (#117144)
Introduce a unified 16-bit backoff counter type (``_Py_BackoffCounter``),
shared between the Tier 1 adaptive specializer and the Tier 2 optimizer. The
API used for adaptive specialization counters is changed but the behavior is
(supposed to be) identical.

The behavior of the Tier 2 counters is changed:
- There are no longer dynamic thresholds (we never varied these).
- All counters now use the same exponential backoff.
- The counter for ``JUMP_BACKWARD`` starts counting down from 16.
- The ``temperature`` in side exits starts counting down from 64.
2024-04-04 15:03:27 +00:00
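A conceptual sketch of an exponential-backoff counter as described above; the starting values 16 and 64 come from the commit message, while the doubling rule, cap, and names are illustrative rather than `_Py_BackoffCounter`'s actual layout.

```python
# Conceptual sketch: the counter counts down, and each time it fires the next
# threshold doubles, so repeated failures trigger work less and less often.
class BackoffCounter:
    MAX_THRESHOLD = 1 << 12

    def __init__(self, threshold):
        self.threshold = threshold
        self.remaining = threshold

    def tick(self):
        """Return True when the counter fires (e.g. try to optimize again)."""
        if self.remaining > 0:
            self.remaining -= 1
            return False
        self.threshold = min(self.threshold * 2, self.MAX_THRESHOLD)
        self.remaining = self.threshold
        return True

jump_backward = BackoffCounter(16)   # JUMP_BACKWARD starting value from the commit message
side_exit = BackoffCounter(64)       # "temperature" starting value from the commit message
fires = [i for i in range(200) if jump_backward.tick()]
print(fires)                         # firing points spread out exponentially
```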
Peter Lazorchak 1c43468886
gh-116168: Remove extra `_CHECK_STACK_SPACE` uops (#117242)
This merges all `_CHECK_STACK_SPACE` uops in a trace into a single `_CHECK_STACK_SPACE_OPERAND` uop that checks whether there is enough stack space for all calls included in the entire trace.
2024-04-03 17:14:18 +00:00
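The effect of the merge can be sketched in a few lines (illustrative only; the function names are hypothetical): rather than one guard per projected call, a single guard checks the combined stack requirement of the whole trace.

```python
def per_call_checks(free_slots, frames_needed):
    # One _CHECK_STACK_SPACE-style guard per projected call, as the stack grows.
    for n in frames_needed:
        if free_slots < n:
            return False
        free_slots -= n
    return True

def single_check(free_slots, frames_needed):
    # One merged _CHECK_STACK_SPACE_OPERAND-style guard for the whole trace.
    return free_slots >= sum(frames_needed)

frames = [40, 25, 60]
print(per_call_checks(200, frames), single_check(200, frames))  # True True
```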
Mark Shannon c32dc47aca
GH-115776: Embed the values array into the object, for "normal" Python objects. (GH-116115) 2024-04-02 11:59:21 +01:00
Sam Gross 19c1dd60c5
gh-117323: Make `cell` thread-safe in free-threaded builds (#117330)
Use critical sections to lock around accesses to cell contents. The critical sections are no-ops in the default (with GIL) build.
2024-03-29 13:35:43 -04:00
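For orientation, the "cell contents" in question are the closure cells that nested functions share; a small sketch of where concurrent access arises (the locking itself lives inside the interpreter, not in user code):

```python
import threading

def make_counter():
    count = 0                         # lives in a closure cell shared by both inner functions
    def incr():
        nonlocal count
        count += 1
    def read():
        return count
    return incr, read

incr, read = make_counter()
threads = [threading.Thread(target=lambda: [incr() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(read())                         # the cell is read and written from multiple threads
```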
Mark Shannon bf82f77957
GH-116422: Tier2 hot/cold splitting (GH-116813)
Splits the "cold" path (deopts and exits) from the "hot" path, reducing the size of most jitted instructions at the cost of slower exits.
2024-03-26 09:35:11 +00:00