Commit Graph

444 Commits

Mark Shannon 517dc65ffc
GH-128682: Stronger checking of `PyStackRef_CLOSE` and `DEAD`. (GH-128683) 2025-01-13 12:37:48 +00:00
Mark Shannon ddd959987c
GH-128685: Specialize (rather than quicken) LOAD_CONST into LOAD_CONST_[IM]MORTAL (GH-128708) 2025-01-13 10:30:28 +00:00
Brandt Bucher 65ae3d5a73
GH-127809: Fix the JIT's understanding of ** (GH-127844) 2025-01-07 17:25:48 -08:00
T. Wouters 8f93dd8a8f
gh-115999: Add free-threaded specialization for COMPARE_OP (#126410)
Add free-threaded specialization for COMPARE_OP, and tests for COMPARE_OP specialization in general.

Co-authored-by: Donghee Na <donghee.na92@gmail.com>
2025-01-07 06:41:01 -08:00
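One way to observe a specialization like this from Python is `dis.dis(..., adaptive=True)`, which prints the adaptive/specialized instruction forms once a function has run enough to warm up. A minimal sketch (the exact specialized COMPARE_OP variant shown depends on the build and version):

```python
import dis

def hot_compare(a, b):
    return a < b

# Warm the call site so the adaptive interpreter can specialize COMPARE_OP.
for i in range(10_000):
    hot_compare(i, i + 1)

# adaptive=True shows the quickened/specialized forms actually in use.
dis.dis(hot_compare, adaptive=True)
```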
Mark Shannon f826beca0c
GH-128375: Better instrument for `FOR_ITER` (GH-128445) 2025-01-06 17:54:47 +00:00
Ken Jin 7ef4907412
gh-128262: Allow specialization of calls to classes with __slots__ (GH-128263) 2024-12-31 12:24:17 +08:00
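For illustration only (not taken from the PR's tests), a hot call site of the kind this change makes specializable, namely a class that defines `__slots__`:

```python
class Point:
    __slots__ = ("x", "y")   # instances have fixed slots, no __dict__

    def __init__(self, x, y):
        self.x = x
        self.y = y

# Repeatedly calling the class warms the CALL site; with this change the
# call can specialize even though Point uses __slots__.
points = [Point(i, i * 2) for i in range(10_000)]
print(len(points))
```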
Mark Shannon 128cc47fbd
GH-127705: Add debug mode for `_PyStackRef`s inspired by HPy debug mode (GH-128121) 2024-12-20 16:52:20 +00:00
Neil Schemenauer 1b15c89a17
gh-115999: Specialize `STORE_ATTR` in free-threaded builds. (gh-127838)
* Add `_PyDictKeys_StringLookupSplit` which does locking on dict keys and
  use in place of `_PyDictKeys_StringLookup`.

* Change `_PyObject_TryGetInstanceAttribute` to use that function
  in the case of split keys.

* Add `unicodekeys_lookup_split` helper which allows code sharing
  between `_Py_dict_lookup` and `_PyDictKeys_StringLookupSplit`.

* Fix locking for `STORE_ATTR_INSTANCE_VALUE`.  Create
  `_GUARD_TYPE_VERSION_AND_LOCK` uop so that object stays locked and
  `tp_version_tag` cannot change.

* Pass `tp_version_tag` to `specialize_dict_access()`, ensuring
  the version we store on the cache is the correct one (in case of
  it changing during the specialize analysis).

* Split `analyze_descriptor` into `analyze_descriptor_load` and
  `analyze_descriptor_store` since those don't share much logic.
  Add `descriptor_is_class` helper function.

* In `specialize_dict_access`, double check `_PyObject_GetManagedDict()`
  in case we race and dict was materialized before the lock.

* Avoid borrowed references in `_Py_Specialize_StoreAttr()`.

* Use `specialize()` and `unspecialize()` helpers.

* Add unit tests to ensure specializing happens as expected in FT builds.

* Add unit tests to attempt to trigger data races (useful for running under TSAN).

* Add `has_split_table` function to `_testinternalcapi`.
2024-12-19 10:21:17 -08:00
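The "double check `_PyObject_GetManagedDict()` in case we race" bullet above is a classic double-checked pattern. A rough Python analogue of that idea, with stand-in names (the real code works on managed dicts under a per-object lock in C):

```python
import threading

_lock = threading.Lock()
_managed_dict = None   # stand-in for the object's materialized dict


def get_or_materialize():
    """Re-check shared state after taking the lock: another thread may have
    materialized the dict between our first check and acquiring the lock."""
    global _managed_dict
    if _managed_dict is not None:       # first, unlocked check
        return _managed_dict
    with _lock:
        if _managed_dict is not None:   # second check, under the lock
            return _managed_dict
        _managed_dict = {}              # materialize exactly once
        return _managed_dict


threads = [threading.Thread(target=get_or_materialize) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(get_or_materialize())
```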
Mark Shannon d2f1d917e8
GH-122548: Implement branch taken and not taken events for sys.monitoring (GH-122564) 2024-12-19 16:59:51 +00:00
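For context, branch monitoring is exposed through the `sys.monitoring` API. The sketch below uses the pre-existing BRANCH event; the commit above adds separate taken/not-taken events on top of this machinery (see the PR for the exact new event names):

```python
import sys

TOOL = sys.monitoring.DEBUGGER_ID   # any unused tool id works for a demo


def on_branch(code, instruction_offset, destination_offset):
    # The destination offset tells a tool which way the branch went.
    print(code.co_name, instruction_offset, "->", destination_offset)


sys.monitoring.use_tool_id(TOOL, "branch-demo")
sys.monitoring.register_callback(TOOL, sys.monitoring.events.BRANCH, on_branch)
sys.monitoring.set_events(TOOL, sys.monitoring.events.BRANCH)


def f(x):
    return "big" if x > 10 else "small"


f(3)
f(30)

sys.monitoring.set_events(TOOL, sys.monitoring.events.NO_EVENTS)
sys.monitoring.free_tool_id(TOOL)
```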
Donghee Na 48c70b8f7d
gh-115999: Enable BINARY_SUBSCR_GETITEM for free-threaded build (gh-127737) 2024-12-19 11:08:17 +09:00
mpage 2de048ce79
gh-115999: Specialize loading attributes from modules in free-threaded builds (#127711)
We use the same approach that was used for specialization of LOAD_GLOBAL in free-threaded builds:

_CHECK_ATTR_MODULE is renamed to _CHECK_ATTR_MODULE_PUSH_KEYS; it pushes the keys object for the following _LOAD_ATTR_MODULE_FROM_KEYS (nee _LOAD_ATTR_MODULE). This arrangement avoids having to recheck the keys version.

_LOAD_ATTR_MODULE is renamed to _LOAD_ATTR_MODULE_FROM_KEYS; it loads the value from the keys object pushed by the preceding _CHECK_ATTR_MODULE_PUSH_KEYS at the cached index.
2024-12-13 10:17:16 -08:00
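A conceptual Python sketch of the guard/consumer split described above; every name here is a stand-in (the real implementation is C uops operating on dict keys objects):

```python
def check_attr_module_push_keys(module_dict, cached_version):
    """Guard: validate the keys version, then hand the validated keys
    object to the next step instead of making it re-read and re-check."""
    keys = module_dict["keys"]              # stand-in for the dk keys object
    if keys["version"] != cached_version:
        raise RuntimeError("deopt")         # stand-in for DEOPT_IF
    return keys


def load_attr_module_from_keys(keys, cached_index):
    """Consumer: load straight from the keys object the guard validated."""
    return keys["entries"][cached_index]


module_dict = {"keys": {"version": 42, "entries": ["spam", "eggs"]}}
keys = check_attr_module_push_keys(module_dict, cached_version=42)
print(load_attr_module_from_keys(keys, cached_index=1))   # -> eggs
```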
Pieter Eendebak 5fc6bb2754
gh-126868: Add freelist for compact int objects (GH-126865) 2024-12-13 10:06:26 +00:00
mpage c84928ed6d
gh-115999: Specialize `CALL_KW` in free-threaded builds (#127713)
* Enable specialization of CALL_KW

* Fix bug pushing frame in _PY_FRAME_KW

`_PY_FRAME_KW` pushes a pointer to the new frame onto the stack for
consumption by the next uop. When pushing the frame fails, we do not
want to push the result, `NULL`, to the stack because it is not
a valid stackref. This works in the default build because `PyStackRef_NULL`
 and `NULL` are the same value, so the `PyStackRef_XCLOSE()` in the error
handler ignores it. In the free-threaded build the values are not the same;
`PyStackRef_XCLOSE()` will attempt to decref a null pointer.
2024-12-11 15:18:22 -08:00
mpage dabcecfd6d
gh-115999: Enable specialization of `CALL` instructions in free-threaded builds (#127123)
The CALL family of instructions was mostly thread-safe already and only required a small number of changes, which are documented below.

A few changes were needed to make CALL_ALLOC_AND_ENTER_INIT thread-safe:

* Added _PyType_LookupRefAndVersion, which returns the type version corresponding to the returned ref.
* Added _PyType_CacheInitForSpecialization, which takes an init method and the corresponding type version and only populates the specialization cache if the current type version matches the supplied version. This prevents potentially caching a stale value in free-threaded builds if we race with an update to __init__.
* Only cache __init__ functions that are deferred in free-threaded builds. This ensures that the reference to __init__ that is stored in the specialization cache is valid if the type version guard in _CHECK_AND_ALLOCATE_OBJECT passes.
* Fix a bug in _CREATE_INIT_FRAME where the frame is pushed to the stack on failure.

A few other miscellaneous changes were also needed:

* Use {LOCK,UNLOCK}_OBJECT in LIST_APPEND. This ensures that the list's per-object lock is held while we are appending to it.
* Add missing co_tlbc for _Py_InitCleanup.
* Stop/start the world around setting the eval frame hook. This allows us to read interp->eval_frame non-atomically and preserves the behavior of _CHECK_PEP_523 documented below.
2024-12-03 11:20:20 -08:00
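The `_PyType_CacheInitForSpecialization` idea above, cache only if the type version still matches the one `__init__` was looked up under, can be sketched in Python with toy stand-ins (nothing here is the real API):

```python
import threading


class ToyType:
    """Stand-in for a type with a tp_version_tag-like counter."""

    def __init__(self, init):
        self.version = 1
        self.init = init

    def set_init(self, fn):
        self.init = fn
        self.version += 1        # any mutation bumps the version


_cache_lock = threading.Lock()
_cache = {}                      # {id(toy_type): (version, init)}


def cache_init_for_specialization(tp, init, version_seen):
    """Populate the cache only if tp still has the version under which
    `init` was looked up; a racing set_init() makes us skip caching."""
    with _cache_lock:
        if tp.version != version_seen:
            return False         # stale: __init__ was replaced meanwhile
        _cache[id(tp)] = (version_seen, init)
        return True


tp = ToyType(init=lambda self: None)
print(cache_init_for_specialization(tp, tp.init, tp.version))   # True
tp.set_init(lambda self: None)
print(cache_init_for_specialization(tp, tp.init, 1))            # False, stale
```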
Neil Schemenauer 276cd66ccb
gh-115999: Add free-threaded specialization for `SEND` (gh-127426)
No additional thread safety changes are required.  Note that sending to
a generator that is shared between threads is currently not safe in the
free-threaded build.
2024-12-03 10:25:12 -08:00
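SEND is the instruction behind `gen.send()` and generator delegation; a small single-threaded example of the code path it drives (per the note above, do not share one generator between threads in the free-threaded build):

```python
def running_total():
    total = 0
    while True:
        value = yield total    # each send() resumes the generator here
        total += value


gen = running_total()
next(gen)                      # prime the generator
for v in (1, 2, 3):
    print(gen.send(v))         # 1, 3, 6
gen.close()
```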
Neil Schemenauer 0cb5222079
gh-115999: Specialize `LOAD_SUPER_ATTR` in free-threaded builds (gh-127128)
Use existing helpers to atomically modify the bytecode.  Add unit tests
to ensure specializing is happening as expected.  Add test_specialize.py
that can be used with ThreadSanitizer to detect data races.  
Fix thread safety issue with cell_set_contents().
2024-12-03 09:32:26 -08:00
Donghee Na 7c2bd9b226
gh-115999: Use light-weight lock for UNPACK_SEQUENCE_LIST (gh-127514) 2024-12-03 00:14:40 +09:00
Donghee Na e2713409cf
gh-115999: Add partial free-thread specialization for BINARY_SUBSCR (gh-127227) 2024-12-02 10:38:17 +09:00
Sam Gross 71ede1142d
gh-115999: Add free-threaded specialization for `STORE_SUBSCR` (#127169)
The specialization only depends on the type, so no special thread-safety
considerations there.

STORE_SUBSCR_LIST_INT needs to lock the list before modifying it.

`_PyDict_SetItem_Take2` already internally locks the dictionary using a
critical section.
2024-11-26 16:46:06 -05:00
Sam Gross 4759ba6eec
gh-127022: Simplify `PyStackRef_FromPyObjectSteal` (#127024)
This gets rid of the immortal check in `PyStackRef_FromPyObjectSteal()`.
Overall, this improves performance by about 2% in the free-threaded
build.

This also renames `PyStackRef_Is()` to `PyStackRef_IsExactly()` because
the macro requires that the tag bits of the arguments match, which is
only true in certain special cases.
2024-11-22 12:55:33 -05:00
Kirill Podoprigora 27486c3365
gh-115999: Add free-threaded specialization for `UNPACK_SEQUENCE` (#126600)
Add free-threaded specialization for `UNPACK_SEQUENCE` opcode.
`UNPACK_SEQUENCE_TUPLE/UNPACK_SEQUENCE_TWO_TUPLE` are already thread safe since tuples are immutable.
`UNPACK_SEQUENCE_LIST` is not thread safe because of the nature of lists (there is nothing preventing another thread from adding items to or removing them from the list while the instruction is executing). To achieve thread safety we add a critical section to the implementation of `UNPACK_SEQUENCE_LIST`, especially around the parts where we check the size of the list and push items onto the stack.


---------

Co-authored-by: Matt Page <mpage@meta.com>
Co-authored-by: mpage <mpage@cs.stanford.edu>
2024-11-22 19:00:35 +02:00
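The critical section matters because the length check and the pushes must see one consistent list. A small demonstration of the kind of concurrent mutation it defends against (illustrative only; the unpack can legitimately raise ValueError when the length changes between iterations):

```python
import threading

data = [1, 2, 3]
stop = threading.Event()


def mutator():
    # Grow and shrink the shared list while the main thread unpacks it.
    while not stop.is_set():
        data.append(4)
        data.pop()


t = threading.Thread(target=mutator)
t.start()
try:
    for _ in range(100_000):
        try:
            a, b, c = data     # UNPACK_SEQUENCE on a list
        except ValueError:
            pass               # list had the wrong length this time around
finally:
    stop.set()
    t.join()
```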
Donghee Na 78a530a578
gh-115999: Add free-threaded specialization for ``TO_BOOL`` (gh-126616) 2024-11-22 07:52:16 +09:00
mpage 09c240f20c
gh-115999: Specialize `LOAD_GLOBAL` in free-threaded builds (#126607)
Enable specialization of LOAD_GLOBAL in free-threaded builds.

Thread-safety of specialization in free-threaded builds is provided by the following:

* A critical section is held on both the globals and builtins objects during specialization. This ensures we get an atomic view of both builtins and globals during specialization.
* Generation of new keys versions is made atomic in free-threaded builds.
* Existing helpers are used to atomically modify the opcode.

Thread-safety of specialized instructions in free-threaded builds is provided by the following:

* Relaxed atomics are used when loading and storing dict keys versions. This avoids potential data races as the dict keys versions are read without holding the dictionary's per-object lock in version guards.
* Dict keys objects are passed from keys version guards to the downstream uops. This ensures that we are loading from the correct offset in the keys object. Once a unicode key has been stored in a keys object for a combined dictionary in free-threaded builds, the offset that it is stored in will never be reused for a different key. Once the version guard passes, we know that we are reading from the correct offset.
* The dictionary read fast-path is used to read values from the dictionary once we know the correct offset.
2024-11-21 11:22:21 -08:00
Mark Shannon aea0c586d1
GH-127010: Don't lazily track and untrack dicts (GH-127027) 2024-11-20 16:41:20 +00:00
Brandt Bucher 48c50ff1a2
GH-126892: Reset warmup counters when JIT compiling code (GH-126893) 2024-11-20 08:11:25 -08:00
Hugo van Kemenade 899fdb213d
Revert "GH-126491: GC: Mark objects reachable from roots before doing cycle collection (GH-126502)" (#126983) 2024-11-19 11:25:09 +02:00
Mark Shannon b0fcc2c47a
GH-126491: GC: Mark objects reachable from roots before doing cycle collection (GH-126502)
* Mark almost all reachable objects before doing collection phase

* Add stats for objects marked

* Visit new frames before each increment

* Remove lazy dict tracking

* Update docs

* Clearer calculation of work to do.
2024-11-18 14:31:26 +00:00
Sergey B Kirpichev d9e251223e
gh-103951: enable optimization for fast attribute access on module subclasses (GH-126264)
Co-authored-by: Nicolas Tessore <n.tessore@ucl.ac.uk>
2024-11-15 16:03:38 +08:00
Ken Jin 6293d00e72
gh-120619: Strength reduce function guards, support 2-operand uop forms (GH-124846)
Co-authored-by: Brandt Bucher <brandtbucher@gmail.com>
2024-11-09 11:35:33 +08:00
Donghee Na 4ea214ea98
gh-115999: Add free-threaded specialization for CONTAINS_OP (gh-126450)
- The specialization logic determines the appropriate specialization using only the operand's type, which is safe to read non-atomically (changing it requires stopping the world). We are guaranteed that the type will not change between when it is checked and when we specialize the bytecode because the types involved are immutable (you cannot assign to `__class__` for exact instances of `dict`, `set`, or `frozenset`). The bytecode is mutated atomically using helpers.
- The specialized instructions rely on the operand type not changing in between the `DEOPT_IF` checks and the calls to the appropriate type-specific helpers (e.g. `_PySet_Contains`). This is a correctness requirement in the default builds and there are no changes to the opcodes in the free-threaded builds that would invalidate this.
2024-11-06 03:35:10 +00:00
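The immutability claim is easy to check from Python: exact instances of the builtin containers reject `__class__` assignment outright, so the type observed during specialization cannot change afterwards:

```python
class MyDict(dict):
    pass


d = {}
try:
    d.__class__ = MyDict       # rejected for exact dict instances
except TypeError as exc:
    print(exc)
```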
Peter Bierma 1371295e67
gh-126366: Fix crash if `__iter__` raises an exception during `yield from` (#126369) 2024-11-05 15:26:36 +05:30
Kirill Podoprigora 78015818c2
gh-126415: Fix conversion warning in `Python/bytecodes.c` (#126416)
Fix conversion warning in bytecodes

Co-authored-by: mpage <mpage@cs.stanford.edu>
2024-11-05 04:12:31 +02:00
mpage 2e95c5ba3b
gh-115999: Implement thread-local bytecode and enable specialization for `BINARY_OP` (#123926)
Each thread specializes a thread-local copy of the bytecode, created on the first RESUME, in free-threaded builds. All copies of the bytecode for a code object are stored in the co_tlbc array on the code object. Each thread reserves a globally unique index identifying its copy of the bytecode in all co_tlbc arrays at thread creation and releases the index at thread destruction. The first entry in every co_tlbc array always points to the "main" copy of the bytecode that is stored at the end of the code object. This ensures that no bytecode is copied for programs that do not use threads.

Thread-local bytecode can be disabled at runtime by providing either -X tlbc=0 or PYTHON_TLBC=0. Disabling thread-local bytecode also disables specialization.

Concurrent modifications to the bytecode made by the specializing interpreter and instrumentation use atomics, with specialization taking care not to overwrite an instruction that was instrumented concurrently.
2024-11-04 11:13:32 -08:00
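The two switches named above can be exercised from a regular script, for example by launching a child interpreter (on builds without this change the option may simply be rejected):

```python
import subprocess
import sys

# -X tlbc=0 (or the PYTHON_TLBC=0 environment variable) disables
# thread-local bytecode, which per the commit message above also disables
# specialization in free-threaded builds.
result = subprocess.run(
    [sys.executable, "-X", "tlbc=0", "-c", "import sys; print(sys._xoptions)"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```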
Tomas R. aab58a93ef
gh-118423: Add `INSTRUCTION_SIZE` macro to code generator (GH-125467) 2024-10-29 17:25:05 +00:00
Mark Shannon faa3272fb8
GH-125837: Split `LOAD_CONST` into three. (GH-125972)
* Add LOAD_CONST_IMMORTAL opcode

* Add LOAD_SMALL_INT opcode

* Remove RETURN_CONST opcode
2024-10-29 11:15:42 +00:00
Mark Shannon 25441592db
GH-125515: Reduce number of compiler warnings in generated code (GH-125697) 2024-10-28 10:30:31 +00:00
Mark Shannon b61fece852
GH-125868: Fix STORE_ATTR_WITH_HINT specialization (GH-125876) 2024-10-24 11:57:02 +01:00
mpage de5a6c7c7d
gh-121459: Fix a couple of uses of `PyStackRef_FromPyObjectSteal` (#125711)
* Fix usage of PyStackRef_FromPyObjectSteal in CALL_TUPLE_1

This was missed in gh-124894

* Fix usage of PyStackRef_FromPyObjectSteal in _CALL_STR_1

This was missed in gh-124894

* Regenerate code
2024-10-21 11:08:13 -07:00
Eric Snow 6d93690954
gh-125604: Move _Py_AuditHookEntry, etc. Out of pycore_runtime.h (gh-125605)
This is essentially a cleanup, moving a handful of API declarations to the header files where they fit best, creating new ones when needed.

We do the following:

* add pycore_debug_offsets.h and move _Py_DebugOffsets, etc. there
* inline struct _getargs_runtime_state and struct _gilstate_runtime_state in _PyRuntimeState
* move struct _reftracer_runtime_state to the existing pycore_object_state.h
* add pycore_audit.h and move to it _Py_AuditHookEntry , _PySys_Audit(), and _PySys_ClearAuditHooks
* add audit.h and cpython/audit.h and move the existing audit-related API there
* move the perfmap/trampoline API from cpython/sysmodule.h to cpython/ceval.h, and remove the now-empty cpython/sysmodule.h
2024-10-18 09:26:08 -06:00
sobolevn 0c8c665581
gh-125470: Fix warning in `Python/generated_cases.c.h` (#125471)
Co-authored-by: Kirill Podoprigora <kirill.bast9@mail.ru>
2024-10-14 23:46:17 +03:00
Mark Shannon 06ca33020e
GH-125323: Convert DECREF_INPUTS_AND_REUSE_FLOAT into a function that takes PyStackRefs. (GH-125439) 2024-10-14 14:18:57 +01:00
Ken Jin 4b358ee647
gh-125323: Remove some unsafe Py_DECREFs in bytecodes.c, replacing them with PyStackRef_CLOSEs (GH-125324) 2024-10-14 09:17:51 +01:00
Mark Shannon c9014374c5
GH-125174: Make immortal objects more robust, following design from PEP 683 (GH-125251) 2024-10-10 18:19:08 +01:00
mpage f978fb4f8d
gh-115999: Refactor `LOAD_GLOBAL` specializations to avoid reloading {globals, builtins} keys (gh-124953)
Each of the `LOAD_GLOBAL` specializations is implemented roughly as:

1. Load keys version.
2. Load cached keys version.
3. Deopt if (1) and (2) don't match.
4. Load keys.
5. Load cached index into keys.
6. Load object from (4) at offset from (5).

This is not thread-safe in free-threaded builds; the keys object may be replaced
in between steps (3) and (4).

This change refactors the specializations to avoid reloading the keys object and
instead pass the keys object from guards to be consumed by downstream uops.
2024-10-09 15:18:25 +00:00
Mark Shannon d1453f60c2
GH-121459: Streamline PyObject* to PyStackRef conversions by disallowing NULL pointers. (GH-124894) 2024-10-07 18:13:04 +01:00
Mark Shannon da071fa3e8
GH-119866: Spill the stack around escaping calls. (GH-124392)
* Spill the evaluation stack around escaping calls in the generated interpreter and JIT.

* The code generator tracks live, cached values so they can be saved to memory when needed.

* Spills the stack pointer around escaping calls, so that the exact stack is visible to the cycle GC.
2024-10-07 14:56:39 +01:00
Mark Shannon f55273b3b7
GH-116968: Remove branch from advance_backoff_counter (GH-124469) 2024-10-07 11:46:33 +01:00
Sam Gross 5aa91c56bf
gh-124296: Remove private dictionary version tag (PEP 699) (#124472) 2024-10-01 12:39:56 -04:00
Savannah Ostrowski 65f1237098
GH-123516: Improve JIT memory consumption by invalidating cold executors (GH-124443)
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
2024-09-27 00:35:42 +00:00
Ken Jin 198756b0f6
gh-117376: Fix off-by-ones in conversion functions (GH-124301)
Fix off-by-ones in conversion function
2024-09-26 02:41:07 +08:00
Irit Katriel 78aeb38f7d
gh-124285: Fix bug where bool() is called multiple times for the same part of a boolean expression (#124394) 2024-09-25 15:51:25 +01:00
Sam Gross f4997bb3ac
gh-123923: Defer refcounting for `f_funcobj` in `_PyInterpreterFrame` (#124026)
Use a `_PyStackRef` and defer the reference to `f_funcobj` when
possible. This avoids some reference count contention in the common case
of executing the same code object from multiple threads concurrently in
the free-threaded build.
2024-09-24 20:08:18 +00:00
Ken Jin 8810e286fa
gh-121459: Deferred LOAD_GLOBAL (GH-123128)
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
Co-authored-by: Sam Gross <655866+colesbury@users.noreply.github.com>
2024-09-14 00:23:51 +08:00
Sam Gross b2afe2aae4
gh-123923: Defer refcounting for `f_executable` in `_PyInterpreterFrame` (#123924)
Use a `_PyStackRef` and defer the reference to `f_executable` when
possible. This avoids some reference count contention in the common case
of executing the same code object from multiple threads concurrently in
the free-threaded build.
2024-09-12 12:37:06 -04:00
Mark Shannon 4ed7d1d6ac
GH-123996: Explicitly mark 'self_or_null' as an array of size 1 to ensure that it is kept in memory for calls (GH-124003) 2024-09-12 15:32:45 +01:00
Savannah Ostrowski 1fbc118c5d
GH-123545: Remove duplicate Py_DECREF when handling _PyOptimizer_Optimize errors (GH-123546) 2024-09-05 10:56:07 -07:00
Victor Stinner f1a0d96f41
gh-123091: Use _Py_IsImmortalLoose() (#123511)
Use _Py_IsImmortalLoose() in bytesobject.c, typeobject.c
and ceval.c.
2024-09-02 14:25:19 +02:00
Mark Shannon 54a05a4600
GH-123232: Factor BINARY_SLICE and STORE_SLICE to handle stats properly for tier 2. (GH-123381) 2024-08-27 10:49:39 +01:00
Kirill Podoprigora 67f2c84bff
gh-123205: `Python/bytecodes.c`: Fix compiler warning (#123206)
Fix MSVC warning "conversion from '__int64' to 'int'"
2024-08-23 15:35:25 -04:00
Mark Shannon 0b0f7befad
GH-123232: Fix "not specialized" stats (GH-123236) 2024-08-23 10:46:03 +01:00
Mark Shannon 5d3201fe3f
GH-123040: Specialize shadowed `LOAD_ATTR`. (GH-123219) 2024-08-23 10:22:35 +01:00
Donghee Na 297f2e093e
gh-123083: Fix a potential use-after-free in ``STORE_ATTR_WITH_HINT`` (gh-123092) 2024-08-22 23:49:09 +09:00
Mark Shannon a3d8c0542e
GH-123197: Only count an instruction as deferred if it hasn't deopted first. (GH-123222)
Only count an instruction as deferred if it hasn't deopted first.
2024-08-22 14:17:10 +01:00
Mark Shannon a4fd7aa4a6
GH-115776: Allow any fixed sized object to have inline values (GH-123192) 2024-08-21 15:52:04 +01:00
Mark Shannon 7b26c4d1e3
GH-123197: Increment correct stat for CALL_KW (GH-123200) 2024-08-21 12:52:28 +01:00
Mark Shannon 1eba8bae92
GH-123185: Check for `NULL` after calling `_PyEvalFramePushAndInit` (GH-123194) 2024-08-21 12:44:56 +01:00
Mark Shannon bb1d30336e
GH-118093: Make `CALL_ALLOC_AND_ENTER_INIT` suitable for tier 2. (GH-123140)
* Convert CALL_ALLOC_AND_ENTER_INIT to micro-ops such that tier 2 supports it

* Allow inexact arguments for CALL_ALLOC_AND_ENTER_INIT.
2024-08-20 16:52:58 +01:00
Mark Shannon c13e7d98fb
GH-118093: Specialize `CALL_KW` (GH-123006) 2024-08-16 17:11:24 +01:00
Brandt Bucher f84754b705
GH-118093: Turn some DEOPT_IFs into EXIT_IFs (GH-122998) 2024-08-14 07:54:42 -07:00
Mark Shannon eec7bdaf01
GH-120024: Remove `CHECK_EVAL_BREAKER` macro. (GH-122968)
* Factor some instructions into micro-ops to isolate CHECK_EVAL_BREAKER for escape analysis

* Eliminate CHECK_EVAL_BREAKER macro
2024-08-14 12:04:05 +01:00
Brandt Bucher 9621a7d017
GH-118093: Handle some polymorphism before requiring progress in tier two (GH-122843) 2024-08-12 12:39:31 -07:00
Sam Gross ab094d1b2b
gh-117139: Replace _PyList_FromArraySteal with stack ref variant (#122830)
This replaces `_PyList_FromArraySteal` with `_PyList_FromStackRefSteal`.
It's functionally equivalent, but takes a `_PyStackRef` array instead of
an array of `PyObject` pointers.

Co-authored-by: Ken Jin <kenjin@python.org>
2024-08-12 14:49:49 -04:00
Sam Gross 736fe4d23e
gh-117139: Fix a few `_PyStackRef` related bugs (#122831)
`BUILD_SET` should use a borrow instead of a steal. The cleanup in `_DO_CALL`
`CONVERSION_FAILED` was incorrect.

Co-authored-by: Ken Jin <kenjin@python.org>
2024-08-12 14:49:33 -04:00
Sam Gross 3e753c689a
gh-118926: Spill deferred references to stack in cases generator (#122748)
This automatically spills the results from `_PyStackRef_FromPyObjectNew`
to the in-memory stack so that the deferred references are visible to
the GC before we make any possibly escaping call.

Co-authored-by: Ken Jin <kenjin@python.org>
2024-08-07 13:23:53 -04:00
sobolevn 61a8bf2853
gh-122759: Remove `assert` from `RERAISE` error handling (#122760) 2024-08-07 17:25:25 +03:00
Sam Gross 674a50ef2f
gh-117139: Fix an incorrect borrow in bytecodes.c (#122318)
`_PyDict_SetItem_Take2` steals both the key (i.e., `sub`) and the value.
2024-08-07 19:06:19 +05:30
Mark Shannon 4c31791848
GH-120024: Move three more escaping calls out of conditional statements (GH-122734) 2024-08-06 14:14:52 +01:00
Mark Shannon a8be8fc6c4
GH-120024: Refactor code a bit so that escaping calls can be wrapped in spill code in code generator (GH-122693) 2024-08-06 08:40:39 +01:00
Mark Shannon 5bd72912a1
GH-122616: Simplify LOAD_ATTR_WITH_HINT and STORE_ATTR_WITH_HINT (GH-122620) 2024-08-05 16:27:48 +01:00
Mark Shannon 7aca84e557
GH-117224: Move the body of a few large-ish micro-ops into helper functions (GH-122601) 2024-08-02 16:31:17 +01:00
Mark Shannon df13a1821a
GH-118095: Add tier two support for BINARY_SUBSCR_GETITEM (GH-120793) 2024-08-01 16:19:05 -07:00
Mark Shannon a9d56e38a0
GH-122155: Track local variables between pops and pushes in cases generator (GH-122286) 2024-08-01 09:27:26 +01:00
Brandt Bucher 15d4cd0967
GH-116090: Fire RAISE events from _FOR_ITER_TIER_TWO (GH-122413) 2024-07-29 12:17:47 -07:00
Brandt Bucher 64857d849f
GH-122294: Burn in the addresses of side exits (GH-122295) 2024-07-26 09:40:15 -07:00
Mark Shannon 95a73917cd
GH-122029: Break INSTRUMENTED_CALL into micro-ops, so that its behavior is consistent with CALL (GH-122177) 2024-07-26 14:35:57 +01:00
Mark Shannon afb0aa6ed2
GH-121131: Clean up and fix some instrumented instructions. (GH-121132)
* Add support for 'prev_instr' to code generator and refactor some INSTRUMENTED instructions
2024-07-26 12:24:12 +01:00
Brandt Bucher d9efa45d74
GH-118093: Add tier two support for BINARY_OP_INPLACE_ADD_UNICODE (GH-122253) 2024-07-25 14:45:07 -07:00
Brandt Bucher 5f6001130f
GH-118093: Add tier two support for LOAD_ATTR_PROPERTY (GH-122283) 2024-07-25 10:45:28 -07:00
Mark Shannon 5e686ff57d
GH-122034: Add StackRef variants of type checks to reduce the number of PyStackRef_AsPyObjectBorrow calls (GH-122037) 2024-07-25 18:32:43 +01:00
Mark Shannon 2e14a52cce
GH-122160: Remove BUILD_CONST_KEY_MAP opcode. (GH-122164) 2024-07-25 16:24:29 +01:00
Brandt Bucher 794546fd53
GH-118093: Remove invalidated executors from side exits (GH-121885) 2024-07-24 09:16:30 -07:00
Brandt Bucher 7b36b67b1e
GH-118093: Add tier two support to several instructions (GH-121884) 2024-07-18 14:24:58 -07:00
Mark Shannon 3eacfc1a4d
GH-121784: Generate an error during code gen if a variable is marked `unused`, but is used and thus cached in a prior uop. (#121788)
* Reject uop definitions that declare values as 'unused' that are already cached by prior uops

* Track which variables are defined and only load from memory when needed

* Support explicit `flush` in macro definitions. 

* Make sure the stack is flushed where needed.
2024-07-18 12:49:24 +01:00
Mark Shannon 8ad6067bd4
GH-121012: Set index to -1 when list iterators become exhausted in tier 2 (GH-121483) 2024-07-08 14:20:13 +01:00
Sam Gross 8e8d202f55
gh-117139: Add _PyTuple_FromStackRefSteal and use it (#121244)
Avoids the extra conversion from stack refs to PyObjects.
2024-07-02 12:30:14 -04:00
Brandt Bucher 33903c53db
GH-116017: Get rid of _COLD_EXITs (GH-120960) 2024-07-01 13:17:40 -07:00
Ken Jin e6543daf12
gh-117139: Fix a few wrong steals in bytecodes.c (GH-121127)
Fix a few wrong steals in bytecodes.c
2024-06-29 02:14:48 +08:00
Victor Stinner 12af8ec864
gh-121040: Use __attribute__((fallthrough)) (#121044)
Fix warnings when using -Wimplicit-fallthrough compiler flag.

Annotate explicitly "fall through" switch cases with a new
_Py_FALLTHROUGH macro which uses __attribute__((fallthrough)) if
available. Replace "fall through" comments with _Py_FALLTHROUGH.

Add _Py__has_attribute() macro. No longer define __has_attribute()
macro if it's not defined. Move also _Py__has_builtin() at the top
of pyport.h.

Co-Authored-By: Nikita Sobolev <mail@sobolevn.me>
2024-06-27 09:58:44 +00:00
Ken Jin 22b0de2755
gh-117139: Convert the evaluation stack to stack refs (#118450)
This PR sets up tagged pointers for CPython.

The general idea is to create a separate struct _PyStackRef for everything on the evaluation stack to store the bits. This forces the C compiler to warn us if we try to cast things or pull things out of the struct directly.

Only for free threading: we tag the low bit if something is deferred - that means we skip incref and decref operations on it. This behavior may change in the future if Mark's plans to defer all objects in the interpreter loop pan out.

This implies a strict stack reference discipline is required. ALL incref and decref operations on stackrefs must use the stackref variants. It is unsafe to untag something then do normal incref/decref ops on it.

The new incref and decref variants are called dup and close. They mimic a "handle" API operating on these stackrefs.

Please read Include/internal/pycore_stackref.h for more information!

---------

Co-authored-by: Mark Shannon <9448417+markshannon@users.noreply.github.com>
2024-06-27 03:10:43 +08:00
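A toy model of the low-bit tagging described above, with plain integers standing in for pointer bits; the real `_PyStackRef` is a C struct and the tagging only applies to free-threaded builds:

```python
DEFERRED_BIT = 1   # low bit marks a deferred (no incref/decref) reference


def make_deferred(addr: int) -> int:
    return addr | DEFERRED_BIT


def is_deferred(ref: int) -> bool:
    return bool(ref & DEFERRED_BIT)


def untag(ref: int) -> int:
    return ref & ~DEFERRED_BIT


ref = make_deferred(0x7F00AA10)        # aligned "pointer", low bit free
assert is_deferred(ref)
assert untag(ref) == 0x7F00AA10
print(hex(ref), is_deferred(ref), hex(untag(ref)))
```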
Brandt Bucher a47abdb45d
GH-117062: Make _JUMP_TO_TOP a general absolute jump (GH-120854) 2024-06-24 08:35:10 -07:00