* Fix use after free in list objects
Set the items pointer in the list object to NULL after the items array
is freed during list deallocation. Otherwise, we can end up with a list
object added to the free list that contains a pointer to an already-freed
items array.
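A minimal sketch of the shape of the fix, assuming CPython's `PyListObject` layout; the surrounding deallocation logic is elided:

```c
static void
list_dealloc_sketch(PyListObject *op)
{
    if (op->ob_item != NULL) {
        PyMem_Free(op->ob_item);
        op->ob_item = NULL;   /* the fix: no stale pointer once the list
                                 object sits on the free list */
    }
    /* ... push op onto the free list, or free the object ... */
}
```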
* Mark `_PyList_FromStackRefStealOnSuccess` as escaping
I think technically it's not escaping, because the only object that
can be decrefed if allocation fails is an exact list, which cannot
execute arbitrary code when it is destroyed. However, this seems less
intrusive than trying to special-case objects in the assert in `_Py_Dealloc`
that checks for non-NULL stack pointers, and it shouldn't matter for performance.
* Add location information when accessing already closed stackref
* Add a `#define` option to track closed stackrefs, providing precise information for use-after-free and double-free errors.
* Combine _GUARD_GLOBALS_VERSION_PUSH_KEYS and _LOAD_GLOBAL_MODULE_FROM_KEYS into _LOAD_GLOBAL_MODULE
* Combine _GUARD_BUILTINS_VERSION_PUSH_KEYS and _LOAD_GLOBAL_BUILTINS_FROM_KEYS into _LOAD_GLOBAL_BUILTINS
* Combine _CHECK_ATTR_MODULE_PUSH_KEYS and _LOAD_ATTR_MODULE_FROM_KEYS into _LOAD_ATTR_MODULE
* Remove stack transient in LOAD_ATTR_WITH_HINT
Windows and macOS require precomputing a "timebase" in order to convert
OS timestamps into nanoseconds. Retrieve and compute this value during
runtime initialization to avoid data races when accessing the time.
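A hedged sketch of the macOS half (the Windows half would similarly cache the result of `QueryPerformanceFrequency()`); the init-hook name is hypothetical:

```c
#include <mach/mach_time.h>

/* Cached once during runtime init and read-only afterwards, so reads
 * from any thread are race-free. */
static mach_timebase_info_data_t timebase;

int
time_init_timebase(void)   /* hypothetical init hook */
{
    if (mach_timebase_info(&timebase) != KERN_SUCCESS) {
        return -1;
    }
    /* later: nanoseconds = ticks * timebase.numer / timebase.denom */
    return 0;
}
```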
The `free_work_item()` function in QSBR may call arbitrary code via
Python object destructors, which may reenter the QSBR code. Reorder
the processing of work items to be robust to reentrancy.
Also fix the TODO for the out-of-memory situation.
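The reordering amounts to a detach-then-process pattern; all names below are hypothetical, since the real QSBR structures differ:

```c
#include <stdlib.h>

struct work_item {
    struct work_item *next;
    void (*free_fn)(void *);
    void *ptr;
};

/* Sketch: detach the whole pending list first, then run the frees from
 * the private copy. A destructor that re-enters QSBR then sees a
 * consistent (already-emptied) queue instead of a half-processed one. */
static void
process_work_items(struct work_item **queue)
{
    struct work_item *todo = *queue;   /* detach */
    *queue = NULL;
    while (todo != NULL) {
        struct work_item *item = todo;
        todo = item->next;
        item->free_fn(item->ptr);      /* may run arbitrary Python code */
        free(item);
    }
}
```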
Uses of PySys_GetObject() and _PySys_GetAttr(), which return a borrowed
reference, have been replaced with one of the following functions, which
return a strong reference and distinguish a missing attribute from an error:
_PySys_GetOptionalAttr(), _PySys_GetOptionalAttrString(),
_PySys_GetRequiredAttr(), and _PySys_GetRequiredAttrString().
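A hedged usage sketch of the optional variant, assuming the 1/0/-1 return convention of the matching public API:

```c
/* "Missing" and "error" are now distinct, and the caller owns a strong
 * reference on success. */
PyObject *out = NULL;
int rc = _PySys_GetOptionalAttrString("stdout", &out);
if (rc < 0) {
    return -1;      /* exception already set */
}
if (rc == 0) {
    /* sys.stdout is genuinely missing; no exception is set */
}
else {
    /* ... use out ... */
    Py_DECREF(out); /* strong reference, unlike PySys_GetObject() */
}
```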
This fixes a fairly subtle bug involving finalizers and resurrection in
debug free threaded builds: if `_PyObject_ResurrectEnd` returns `1`
(i.e., the object was resurrected by a finalizer), it's not safe to
access the object because it might still be deallocated. For example:
* The finalizer may have exposed the object to another thread. That
thread may hold the last reference and concurrently deallocate it any
time after `_PyObject_ResurrectEnd()` returns `1`.
* `_PyObject_ResurrectEnd()` may call `_Py_brc_queue_object()`, which
may internally deallocate the object immediately if the owning thread
is dead.
Therefore, it's important not to access the object after it's
resurrected. We only violate this in two cases, and only in debug
builds:
* We assert that the object is tracked appropriately. This is now moved
up between the finalizer and the `_PyObject_ResurrectEnd()` call.
* The `--with-trace-refs` builds may need to remember the object if
it's resurrected. This is now handled by `_PyObject_ResurrectStart()`
and `_PyObject_ResurrectEnd()`.
Note that `--with-trace-refs` is currently disabled in `--disable-gil`
builds because the refchain hash table isn't thread-safe, but this
refactoring avoids an additional thread-safety issue.
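The resulting ordering can be sketched as follows; control flow is illustrative rather than the exact CPython source:

```c
/* Sketch: do debug checks on the object *before* ending the resurrection
 * window. After _PyObject_ResurrectEnd() returns 1, the object must not
 * be touched at all. */
_PyObject_ResurrectStart(op);            /* refcount 0 -> 1 */
finalize(op);                            /* may expose op to other threads */
#ifdef Py_DEBUG
assert_tracked_appropriately(op);        /* hypothetical check: now done here */
#endif
if (_PyObject_ResurrectEnd(op)) {
    return;   /* resurrected: another thread may free op at any moment */
}
/* not resurrected: we hold the only reference; continue deallocation */
```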
* Implement C recursion protection with limit pointers for Linux, macOS, and Windows
* Remove calls to PyOS_CheckStack
* Add stack protection to parser
* Make tests more robust to low stacks
* Improve error messages for stack overflow
Revert "GH-91079: Implement C stack limits using addresses, not counters. (GH-130007)" for now
Unfortunately, the change broke some buildbots.
This reverts commit 2498c22fa0.
CPython currently changes `PYMEM_DOMAIN_RAW` to the default allocator
temporarily during initialization and shutdown. The motivation is to
ensure that core runtime structures are allocated and freed using the
same allocator. However, modifying the current allocator changes global
state and is not thread-safe, even with the GIL. Other threads may be
allocating or freeing objects using `PYMEM_DOMAIN_RAW`; they are not
required to hold the GIL to call `PyMem_RawMalloc()`/`PyMem_RawFree()`.
This adds new internal-only functions like `_PyMem_DefaultRawMalloc`
that aren't affected by calls to `PyMem_SetAllocator()`, so they're
appropriate for Python runtime initialization and finalization. Use
these calls in places where we previously swapped to the default raw
allocator.
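A before/after sketch of a call site; `_PyMem_DefaultRawFree` is assumed from the naming scheme:

```c
/* Before (racy): temporarily swap the global raw allocator around the
 * allocation, changing state that other threads can observe:
 *
 *     PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &default_raw);
 *     p = PyMem_RawMalloc(size);
 *     PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &saved_raw);
 *
 * After: call the default allocator directly; no global state changes,
 * so concurrent PyMem_RawMalloc() callers are unaffected. */
void *p = _PyMem_DefaultRawMalloc(size);
if (p == NULL) {
    return -1;   /* handle allocation failure */
}
/* ... use p during init/finalization ... */
_PyMem_DefaultRawFree(p);
```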
* Implement C recursion protection with limit pointers
* Remove calls to PyOS_CheckStack
* Add stack protection to parser
* Make tests more robust to low stacks
* Improve error messages for stack overflow
* gh-129701: Fix a data race in `intern_common` in the free threaded build
* Use a mutex to avoid potentially returning a non-immortalized string,
because immortalization happens after the insertion into the interned
dict.
* Use `Py_DECREF()` calls instead of `Py_SET_REFCNT(s, Py_REFCNT(s) - 2)`
for thread-safety. This code path isn't performance sensitive, so
just use `Py_DECREF()` unconditionally for simplicity.
Use an atomic operation when setting
`_PyRuntime.signals.unhandled_keyboard_interrupt`. We now only clear the
variable at the start of `_PyRun_Main`, which is the same function where
we check it.
This avoids race conditions where previously another thread might call
`run_eval_code_obj()` and erroneously clear the unhandled keyboard
interrupt.
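Sketched with CPython's `_Py_atomic_*` helpers (call sites abbreviated):

```c
/* Setting the flag when a KeyboardInterrupt escapes to the top level: */
_Py_atomic_store_int(&_PyRuntime.signals.unhandled_keyboard_interrupt, 1);

/* Clearing and checking now both happen in _PyRun_Main(): */
_Py_atomic_store_int(&_PyRuntime.signals.unhandled_keyboard_interrupt, 0);
/* ... run the program ... */
if (_Py_atomic_load_int(&_PyRuntime.signals.unhandled_keyboard_interrupt)) {
    /* exit with SIGINT semantics */
}
```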
The MemoryError freelist was not thread-safe in the free threaded build.
Use a mutex to protect accesses to the freelist. Unlike other freelists,
the MemoryError freelist is not performance sensitive.
The stack pointers in interpreter frames are nearly always valid now, so
use them when visiting each thread's frame. For now, don't collect
objects with deferred references in the rare case that we see a frame
with a NULL stack pointer.
This reduces the size of _PyInterpreterFrame by 8 bytes on 64-bit
platforms using the free threading build due to alignment requirements.
This allows for slightly more recursive calls into the interpreter (from
C), but `test_call.test_super_deep` still crashes.
* Remove all 'if (0)' and 'if (1)' conditional stack effects
* Use array instead of conditional for BUILD_SLICE args
* Refactor LOAD_GLOBAL to use a common conditional uop
* Remove conditional stack effects from LOAD_ATTR specializations
* Replace conditional stack effects in LOAD_ATTR with a 0 or 1 sized array.
* Remove conditional stack effects from CALL_FUNCTION_EX
Remove _PyInterpreterState_GetConfigCopy() and
_PyInterpreterState_SetConfig() private functions. PEP 741 "Python
Configuration C API" added a better public C API: PyConfig_Get() and
PyConfig_Set().
In the free threading build, per-thread reference counting uses a
unique id for some objects to index into the local reference count
table. Use 0 instead of -1 to indicate that the id is not assigned. This
avoids bugs where zero-initialized heap type objects look like they have
a unique id assigned.
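A sketch of why the sentinel change matters; the field and helper names here are hypothetical:

```c
#define UNIQUE_ID_UNASSIGNED 0   /* previously -1 */

/* Heap type objects are allocated zero-initialized, so with a -1
 * sentinel a fresh object's id field already read 0 and looked
 * "assigned" (indexing slot 0 of the refcount table). With 0 as the
 * sentinel, zero-initialized memory means "unassigned" by construction. */
if (ht->unique_id == UNIQUE_ID_UNASSIGNED) {
    ht->unique_id = assign_unique_id(ht);   /* hypothetical helper */
}
```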
* Use TABLES_LOCK() to protect 'tracemalloc_config.tracing'.
* Hold TABLES_LOCK() longer while accessing tables.
* tracemalloc_realloc() and tracemalloc_free() no longer
remove the trace on reentrant call.
* _PyTraceMalloc_Stop() unregisters _PyTraceMalloc_TraceRef().
* _PyTraceMalloc_GetTraces() sets the reentrant flag.
* tracemalloc_clear_traces_unlocked() sets the reentrant flag.
Replaces the trampoline mechanism in Emscripten with an implementation that uses a
recently added feature of wasm-gc instead of JS type reflection, when that feature is
available.
Add a fast path to (single-mutex) critical section locking _iff_ the mutex
is already held by the currently active, top-most critical section of this
thread. This can matter a lot for indirectly recursive critical sections
without intervening critical sections.
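A sketch of the fast-path test, using the public `PyCriticalSection` field names; the tag bits that CPython stores in `tstate->critical_section` are elided for clarity:

```c
/* Sketch: if the top-most active critical section of this thread already
 * holds the requested mutex, record "nothing to unlock" and return. */
static void
critical_section_begin(PyThreadState *tstate, PyCriticalSection *c, PyMutex *m)
{
    PyCriticalSection *top = (PyCriticalSection *)tstate->critical_section;
    if (top != NULL && top->_cs_mutex == m) {
        c->_cs_mutex = NULL;   /* fast path: exit becomes a no-op */
        c->_cs_prev = 0;
        return;
    }
    /* slow path: full acquire, suspending held sections if we must block */
}
```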
Methods (functions defined in class scope) are likely to be cleaned
up by the GC anyway.
Add a new code flag, `CO_METHOD`, that is set for functions defined
in a class scope. Use that when deciding to defer functions.
* Add `_PyDictKeys_StringLookupSplit` which does locking on dict keys and
use in place of `_PyDictKeys_StringLookup`.
* Change `_PyObject_TryGetInstanceAttribute` to use that function
in the case of split keys.
* Add `unicodekeys_lookup_split` helper which allows code sharing
between `_Py_dict_lookup` and `_PyDictKeys_StringLookupSplit`.
* Fix locking for `STORE_ATTR_INSTANCE_VALUE`. Create
`_GUARD_TYPE_VERSION_AND_LOCK` uop so that object stays locked and
`tp_version_tag` cannot change.
* Pass `tp_version_tag` to `specialize_dict_access()`, ensuring
that the version we store in the cache is the correct one (in case
it changes during the specialization analysis).
* Split `analyze_descriptor` into `analyze_descriptor_load` and
`analyze_descriptor_store` since those don't share much logic.
Add `descriptor_is_class` helper function.
* In `specialize_dict_access`, double check `_PyObject_GetManagedDict()`
in case we race and the dict was materialized before taking the lock.
* Avoid borrowed references in `_Py_Specialize_StoreAttr()`.
* Use `specialize()` and `unspecialize()` helpers.
* Add unit tests to ensure specializing happens as expected in FT builds.
* Add unit tests to attempt to trigger data races (useful for running under TSAN).
* Add `has_split_table` function to `_testinternalcapi`.
- Add a helper to set an error from locale-encoded `char*`
- Use the helper for gdbm & dlerror messages
Co-authored-by: Victor Stinner <vstinner@python.org>
We use the same approach that was used for specialization of LOAD_GLOBAL in free-threaded builds:
_CHECK_ATTR_MODULE is renamed to _CHECK_ATTR_MODULE_PUSH_KEYS; it pushes the keys object for the following _LOAD_ATTR_MODULE_FROM_KEYS (nee _LOAD_ATTR_MODULE). This arrangement avoids having to recheck the keys version.
_LOAD_ATTR_MODULE is renamed to _LOAD_ATTR_MODULE_FROM_KEYS; it loads the value from the keys object pushed by the preceding _CHECK_ATTR_MODULE_PUSH_KEYS at the cached index.
Objects may be temporarily "resurrected" in destructors when calling
finalizers or watcher callbacks. We previously undid the resurrection
by decrementing the reference count using `Py_SET_REFCNT`. This was not
thread-safe because other threads might be accessing the object
(modifying its reference count) if it was exposed by the finalizer,
watcher callback, or temporarily accessed by a racy dictionary or list
access.
This adds internal-only thread-safe functions for temporary object
resurrection during destructors.
The CALL family of instructions was mostly thread-safe already and only required a small number of changes, which are documented below.
A few changes were needed to make CALL_ALLOC_AND_ENTER_INIT thread-safe:
Added _PyType_LookupRefAndVersion, which returns the type version corresponding to the returned ref.
Added _PyType_CacheInitForSpecialization, which takes an init method and the corresponding type version and only populates the specialization cache if the current type version matches the supplied version. This prevents potentially caching a stale value in free-threaded builds if we race with an update to __init__.
Only cache __init__ functions that are deferred in free-threaded builds. This ensures that the reference to __init__ that is stored in the specialization cache is valid if the type version guard in _CHECK_AND_ALLOCATE_OBJECT passes.
Fix a bug in _CREATE_INIT_FRAME where the frame is pushed to the stack on failure.
A few other miscellaneous changes were also needed:
Use {LOCK,UNLOCK}_OBJECT in LIST_APPEND, ensuring that the list's per-object lock is held while we are appending to it (see the sketch after this list).
Add missing co_tlbc for _Py_InitCleanup.
Stop/start the world around setting the eval frame hook. This allows us to read interp->eval_frame non-atomically and preserves the behavior of _CHECK_PEP_523 documented below.
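As referenced above, a hedged sketch of the LIST_APPEND locking; error handling and stackref details of the real handler are elided:

```c
/* Sketch: hold the list's per-object lock across the append so a
 * concurrent resize cannot race with us. Treats LOCK_OBJECT as a plain
 * acquire; the real macro and handler may differ in details. */
LOCK_OBJECT(list);
int err = _PyList_AppendTakeRef((PyListObject *)list, item);
UNLOCK_OBJECT(list);
if (err < 0) {
    goto error;
}
```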
* Mark almost all reachable objects before doing collection phase
* Add stats for objects marked
* Visit new frames before each increment
* Update docs
* Clearer calculation of work to do.
"Generally, mixed-mode arithmetic combining real and complex variables should
be performed directly, not by first coercing the real to complex, lest the sign
of zero be rendered uninformative; the same goes for combinations of pure
imaginary quantities with complex variables." (c) Kahan, W: Branch cuts for
complex elementary functions.
This patch implements mixed-mode arithmetic rules, combining real and
complex variables as specified by C standards since C99 (in particular,
there is no special version for the true division with real lhs
operand). Most C compilers implementing C99+ Annex G have only these
special rules (without support for imaginary type, which is going to be
deprecated in C2y).
This gets rid of the immortal check in `PyStackRef_FromPyObjectSteal()`.
Overall, this improves performance about 2% in the free threading
build.
This also renames `PyStackRef_Is()` to `PyStackRef_IsExactly()` because
the macro requires that the tag bits of the arguments match, which is
only true in certain special cases.
Add free-threaded specialization for `UNPACK_SEQUENCE` opcode.
`UNPACK_SEQUENCE_TUPLE/UNPACK_SEQUENCE_TWO_TUPLE` are already thread safe since tuples are immutable.
`UNPACK_SEQUENCE_LIST` is not thread safe because of the nature of lists (there is nothing preventing another thread from adding items to or removing them from the list while the instruction is executing). To achieve thread safety we add a critical section to the implementation of `UNPACK_SEQUENCE_LIST`, around the parts where we check the size of the list and push items onto the stack.
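In sketch form (the output-pointer naming is illustrative; the real handler deoptimizes on a size mismatch rather than asserting):

```c
/* Sketch: both the size check and the pushes happen while the list's
 * critical section is held, so the list cannot be resized in between.
 * output: a _PyStackRef* into the interpreter's value stack. */
Py_BEGIN_CRITICAL_SECTION(seq);
assert(PyList_GET_SIZE(seq) == oparg);   /* guard; real code deopts here */
for (Py_ssize_t i = oparg; --i >= 0; ) {
    /* push in reverse so the first list element ends up on top */
    *output++ = PyStackRef_FromPyObjectNew(PyList_GET_ITEM(seq, i));
}
Py_END_CRITICAL_SECTION();
```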
---------
Co-authored-by: Matt Page <mpage@meta.com>
Co-authored-by: mpage <mpage@cs.stanford.edu>
Enable specialization of LOAD_GLOBAL in free-threaded builds.
Thread-safety of specialization in free-threaded builds is provided by the following:
A critical section is held on both the globals and builtins objects during specialization. This ensures we get an atomic view of both builtins and globals during specialization.
Generation of new keys versions is made atomic in free-threaded builds.
Existing helpers are used to atomically modify the opcode.
Thread-safety of specialized instructions in free-threaded builds is provided by the following:
Relaxed atomics are used when loading and storing dict keys versions. This avoids potential data races as the dict keys versions are read without holding the dictionary's per-object lock in version guards.
Dict keys objects are passed from keys version guards to the downstream uops. This ensures that we are loading from the correct offset in the keys object. Once a unicode key has been stored in a keys object for a combined dictionary in free-threaded builds, the offset at which it is stored will never be reused for a different key. Once the version guard passes, we know that we are reading from the correct offset.
The dictionary read fast-path is used to read values from the dictionary once we know the correct offset.
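A sketch of the guard described above, assuming the spelling of CPython's free-threaded atomic wrapper macros:

```c
/* Version guard: dk_version is read without holding the dict's
 * per-object lock, so use a relaxed atomic load to avoid a data race. */
uint32_t version = FT_ATOMIC_LOAD_UINT32_RELAXED(keys->dk_version);
DEOPT_IF(version != expected_version);  /* expected_version: cached at
                                           specialization time */
/* On success, the keys object itself is pushed for the downstream uop,
 * which loads the value from the cached offset via the dict read
 * fast-path. */
```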
This is a precursor to the actual fix for gh-114940, where we will change these macros to use the new lock. This change is almost entirely mechanical; the exceptions are the loops in codeobject.c and ceval.c, which now hold the "head" lock. Note that almost all of the uses of _Py_FOR_EACH_TSTATE_UNLOCKED() here will change to _Py_FOR_EACH_TSTATE_BEGIN() once we add the new per-interpreter lock.
This approach eliminates the originally reported race. It also gets rid of the deadlock reported in gh-96071, so we can remove the workaround added then.
* Mark almost all reachable objects before doing collection phase
* Add stats for objects marked
* Visit new frames before each increment
* Remove lazy dict tracking
* Update docs
* Clearer calculation of work to do.
The PyMutex implementation supports unlocking after fork because we
clear the list of waiters in parking_lot.c. This doesn't work as well
for _PyRecursiveMutex because on some systems, such as SerenityOS, the
thread id is not preserved across fork().
These changes make it easier to backport the _interpreters, _interpqueues, and _interpchannels modules to Python 3.12.
This involves the following:
* add the _PyXI_GET_STATE() and _PyXI_GET_GLOBAL_STATE() macros
* add _PyXIData_lookup_context_t and _PyXIData_GetLookupContext()
* add _Py_xi_state_init() and _Py_xi_state_fini()
These changes make it easier to backport the _interpreters, _interpqueues, and _interpchannels modules to Python 3.12.
This involves the following:
* rename several structs and typedefs
* add several typedefs
* stop using the PyThreadState.state field directly in parking_lot.c
Move creation of the tuple for the var-positional parameter out of
_PyArg_UnpackKeywordsWithVararg().
Merge _PyArg_UnpackKeywordsWithVararg() with _PyArg_UnpackKeywords().
Add a new parameter to _PyArg_UnpackKeywords().
The "parameters" and "converters" attributes of ParseArgsCodeGen no
longer contain the var-positional parameter. It is now available as the
"varpos" attribute. Optimize code generation for var-positional
parameter and reuse the same generating code for functions with and without
keyword parameters.
Add special converters for var-positional parameter. "tuple" represents it as
a Python tuple and "array" represents it as a continuous array of PyObject*.
"object" is a temporary alias of "tuple".
The primary objective here is to allow some later changes to be cleaner. Mostly this involves renaming things and moving a few things around.
* CrossInterpreterData -> XIData
* crossinterpdatafunc -> xidatafunc
* split out pycore_crossinterp_data_registry.h
* add _PyXIData_lookup_t
Each thread specializes a thread-local copy of the bytecode, created on the first RESUME, in free-threaded builds. All copies of the bytecode for a code object are stored in the co_tlbc array on the code object. Threads reserve a globally unique index identifying their copy of the bytecode in all co_tlbc arrays at thread creation and release the index at thread destruction. The first entry in every co_tlbc array always points to the "main" copy of the bytecode that is stored at the end of the code object. This ensures that no bytecode is copied for programs that do not use threads.
Thread-local bytecode can be disabled at runtime by providing either -X tlbc=0 or PYTHON_TLBC=0. Disabling thread-local bytecode also disables specialization.
Concurrent modifications to the bytecode made by the specializing interpreter and instrumentation use atomics, with specialization taking care not to overwrite an instruction that was instrumented concurrently.
* Remove `@suppress_immortalization` decorator
* Make suppression flag per-thread instead of per-interpreter
* Suppress immortalization in `eval()` to avoid refleaks in three tests
(test_datetime.test_roundtrip, test_logging.test_config8_ok, and
test_random.test_after_fork).
* frozenset() is constant, but not a singleton. When run multiple times,
the test could fail due to constant interning.
This replaces `_PyEval_BuiltinsFromGlobals` with
`_PyDict_LoadBuiltinsFromGlobals`, which returns a new reference
instead of a borrowed reference. Internally, the new function uses
per-thread reference counting when possible to avoid contention on the
refcount fields on the builtins module.
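A hedged usage sketch highlighting the ownership change; since the function returns a new reference, releasing it with `Py_DECREF()` is assumed:

```c
PyObject *builtins = _PyDict_LoadBuiltinsFromGlobals(globals);
if (builtins == NULL) {
    return NULL;       /* error set */
}
/* ... use builtins; internally a per-thread reference may have been
 * taken, avoiding contention on the module's refcount fields ... */
Py_DECREF(builtins);
```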
This fixes a crash when `gc.get_objects()` or `gc.get_referrers()` is
called during a GC in the free threading build.
Switch to `_PyObjectStack` to avoid corrupting the `struct worklist`
linked list maintained by the GC. Also, don't return objects that are frozen
(`gc.freeze()`) or in the process of being collected to more closely match
the behavior of the default build.
This is essentially a cleanup, moving a handful of API declarations to the header files where they fit best, creating new ones when needed.
We do the following:
* add pycore_debug_offsets.h and move _Py_DebugOffsets, etc. there
* inline struct _getargs_runtime_state and struct _gilstate_runtime_state in _PyRuntimeState
* move struct _reftracer_runtime_state to the existing pycore_object_state.h
* add pycore_audit.h and move to it _Py_AuditHookEntry, _PySys_Audit(), and _PySys_ClearAuditHooks()
* add audit.h and cpython/audit.h and move the existing audit-related API there
* move the perfmap/trampoline API from cpython/sysmodule.h to cpython/ceval.h, and remove the now-empty cpython/sysmodule.h
On Arm v5 it is not possible to get the thread ID via the c13 register,
hence the illegal instruction. The c13 register has provided the
thread ID since the Arm v6K architecture variant. Other variants of
Arm v6 (T2, Z, and base) don't provide the thread ID via c13.
For the sake of simplicity we group v5 and v6 together and
consider those instructions available only on Arm v7.
Use per-thread refcounting for the reference from function objects to
their corresponding code object. This can be a source of contention when
frequently creating nested functions. Deferred refcounting alone isn't a
great fit here because these references are on the heap and may be
modified by other libraries.