Mark a few functions used by the interpreter loop as noinline
These are all slow paths and should not be inlined into the interpreter
loop. Unfortunately, they end up being inlined with LTO and the current PGO
task.
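As a sketch, with a hypothetical helper name: CPython's `Py_NO_INLINE` macro (from pyport.h) expands to the compiler-specific noinline attribute and keeps a function out of the interpreter loop even when LTO would otherwise inline it.
```
#include "Python.h"

/* Hypothetical slow-path helper: Py_NO_INLINE expands to
   __attribute__((noinline)) or __declspec(noinline), so the body stays
   out of the interpreter loop even under LTO/PGO. */
static Py_NO_INLINE PyObject *
slow_path_fallback(PyObject *obj)
{
    /* rarely executed fallback work */
    return PyObject_Str(obj);
}
```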
Make `warnings.catch_warnings()` use a context variable for holding
the warning filtering state if the `sys.flags.context_aware_warnings`
flag is set to true. This makes using the context manager thread-safe in
multi-threaded programs.
Add the `sys.flags.thread_inherit_context` flag. If true, starting a new
thread with `threading.Thread` will use a copy of the context
from the caller of `Thread.start()`.
Both these flags are set to true by default for the free-threaded build
and false for the default build.
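A minimal sketch of the underlying mechanism, using the public contextvars C API (the variable name is illustrative, not the one used by the warnings machinery): each context sees its own value, so mutating the filter state in one thread's context does not disturb another's.
```
#include "Python.h"

/* Illustrative: per-context state via a context variable. */
static PyObject *
demo_per_context_state(void)
{
    PyObject *var = PyContextVar_New("demo_filters", Py_None);
    if (var == NULL) {
        return NULL;
    }
    PyObject *filters = PyList_New(0);
    if (filters == NULL) {
        Py_DECREF(var);
        return NULL;
    }
    /* Set() affects only the current context... */
    PyObject *token = PyContextVar_Set(var, filters);
    Py_XDECREF(token);
    /* ...and Get() reads the value for the current context. */
    PyObject *value = NULL;
    if (PyContextVar_Get(var, Py_None, &value) < 0) {
        value = NULL;
    }
    Py_DECREF(filters);
    Py_DECREF(var);
    return value;  /* strong reference, or NULL on error */
}
```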
Move the Python implementation of warnings.py into _py_warnings.py.
Make _contextvars a builtin module.
Co-authored-by: Kumar Aditya <kumaraditya@python.org>
The SLP autovectorizer can cause poor code generation for opcode dispatch, negating any benefit we get from vectorization elsewhere in the interpreter loop.
Non-tuple sequences are deprecated as arguments for the "(items)" format unit
in PyArg_ParseTuple() and other argument parsing functions if items contains
format units that store a borrowed buffer or reference (e.g. "s" and "O").
str and bytearray are no longer accepted as valid sequences.
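An illustrative example of the now-deprecated pattern (function name hypothetical): the inner "s" and "O" units store a borrowed buffer and a borrowed reference, which is only safe when the argument is a real tuple whose items stay alive after the call.
```
#include "Python.h"

/* Deprecated pattern: "(sO)" borrows from the inner sequence's items.
   For a non-tuple sequence, the parser may have to go through a
   temporary, and the borrowed values would not outlive the call. */
static PyObject *
demo(PyObject *self, PyObject *args)
{
    const char *name;
    PyObject *obj;
    if (!PyArg_ParseTuple(args, "(sO)", &name, &obj)) {
        return NULL;
    }
    return PyUnicode_FromString(name);
}
```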
A new extension module, `_hmac`, exposes the formally verified HACL* implementation of HMAC.
The HACL* implementation is used as a fallback when the OpenSSL implementation of HMAC
is not available or is disabled. For now, only named hash algorithms are recognized, and
the SIMD support that HACL* provides for the BLAKE2 hash functions is not yet used.
CPython's pthread-based thread identifier relies on pthread_t being
representable as an unsigned integer type.
This is true in most Linux libc implementations, where it is defined as an
unsigned long; musl, however, typedefs it as a struct pointer.
If the pointer has the high bit set and is cast to PyThread_ident_t, the
resultant value can be sign-extended [0]. This can cause issues when
comparing against threading._MainThread's identifier. The main thread's
identifier value is retrieved via _get_main_thread_ident, which is backed
by an unsigned long that truncates sign-extended bits.
```
>>> hex(threading.main_thread().ident)
'0xb6f33f3c'
>>> hex(threading.current_thread().ident)
'0xffffffffb6f33f3c'
```
Work around this by conditionally compiling in some code for non-glibc
based Linux platforms that are at risk of sign-extension to return a
PyLong based on the main thread's unsigned long thread identifier if the
current thread is the main thread.
[0]: https://gcc.gnu.org/onlinedocs/gcc-14.2.0/gcc/Arrays-and-pointers-implementation.html
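A self-contained illustration of the widening behavior described above (the values mirror the REPL session):
```
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* pthread_t bits with the high bit set, as seen on 32-bit musl. */
    uint32_t raw = 0xb6f33f3cU;
    /* Widening through a signed 32-bit value sign-extends... */
    long long widened = (int32_t)raw;
    /* ...so the unsigned 64-bit identifier gains leading 0xffffffff. */
    printf("%#llx\n", (unsigned long long)widened);  /* 0xffffffffb6f33f3c */
    return 0;
}
```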
---------
Signed-off-by: Vincent Fazio <vfazio@gmail.com>
Optimize `LOAD_FAST` opcodes into faster versions that load borrowed references onto the operand stack when we can prove that the lifetime of the local outlives the lifetime of the temporary that is loaded onto the stack.
* Rename 'defined' attribute to 'in_local' to more accurately reflect how it is used
* Make death of variables explicit even for array variables.
* Convert in_memory from boolean to stack offset
* Don't apply liveness analysis to optimizer generated code
* Add 'out' parameter to stack.pop
Minor readability fix in PyUnstable_GC_VisitObjects
Replaces `if (visit_generation())` with `if (visit_generation() < 0)`,
since we are checking for the failure case, and it's confusing to have
that be implicitly `true`.
Also fixes a misspelt variable name.
In the free threaded build, the `_PyObject_LookupSpecial()` call can lead to
reference count contention on the returned function object because it
doesn't use stackrefs. Refactor some of the callers to use
`_PyObject_MaybeCallSpecialNoArgs`, which uses stackrefs internally.
This fixes the scaling bottleneck in the "lookup_special" microbenchmark
in `ftscalingbench.py`. However, there are still some uses of
`_PyObject_LookupSpecial()` that need to be addressed in future PRs.
Concurrent accesses from multiple threads to the same `cell` object did not
scale well in the free-threaded build. Use `_PyStackRef` and optimistically
avoid locking to improve scaling.
With the locks around cell reads gone, some of the free threading tests were
prone to starvation: the readers were able to run in a tight loop and the
writer threads weren't scheduled frequently enough to make timely progress.
Adjust the tests to avoid this.
Co-authored-by: Donghee Na <donghee.na@python.org>
* Rename 'defined' attribute to 'in_local' to more accurately reflect how it is used
* Make death of variables explicit even for array variables.
* Convert in_memory from boolean to stack offset
* Don't apply liveness analysis to optimizer generated code
* Fix RETURN_VALUE in optimizer
This tells TSAN not to sanitize `PyUnstable_InterpreterFrame_GetLine()`.
There's a possible data race on the access to the frame's `instr_ptr`
if the frame is currently executing. We don't really care about the
race. In theory, we could use relaxed atomics for every access to
`instr_ptr`, but that would create more code churn and current compilers
are overly conservative with optimizations around relaxed atomic
accesses.
We also don't sanitize `_PyFrame_IsIncomplete()` because it accesses
`instr_ptr` and is called from assertions within PyFrame_GetCode().
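A sketch of the suppression technique (the macro name here is hypothetical; CPython has its own equivalent): the Clang attribute tells TSAN not to instrument the function, so the intentional race is not reported.
```
/* Hypothetical macro; Clang spells the attribute no_sanitize("thread"). */
#if defined(__has_feature)
#  if __has_feature(thread_sanitizer)
#    define DEMO_NO_TSAN __attribute__((no_sanitize("thread")))
#  endif
#endif
#ifndef DEMO_NO_TSAN
#  define DEMO_NO_TSAN
#endif

/* The racy read is benign by design; TSAN skips this function. */
static DEMO_NO_TSAN int
read_racy_value(const int *p)
{
    return *p;
}
```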
* Move _Py_VISIT_STACKREF() from pycore_gc.h to pycore_stackref.h.
* Remove pycore_interpframe.h include from pycore_genobject.h.
* Remove now useless includes from C files.
* Add pycore_interpframe_structs.h to Makefile.pre.in and
pythoncore.vcxproj.
This makes more operations on frame objects thread-safe in the free
threaded build, which fixes some data races that occurred when passing
exceptions between threads.
However, accessing local variables from another thread while the frame is running
is still not thread-safe and may crash the interpreter.
Don't use PyObject_Free() as tp_dealloc, to avoid undefined
behavior. Instead, use the default deallocator, which just calls
tp_free which is PyObject_Free().
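A minimal sketch of the safe pattern: tp_dealloc gets a function with the proper `destructor` signature that defers to tp_free, rather than `PyObject_Free` itself, whose `void (*)(void *)` type makes the call through a `destructor` pointer undefined behavior.
```
#include "Python.h"

/* Correct: matches the destructor signature and defers to tp_free,
   which is PyObject_Free() by default. */
static void
demo_dealloc(PyObject *self)
{
    Py_TYPE(self)->tp_free(self);
}
```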
Implement a workaround version of `pthread_get_stackaddr_np` for Emscripten.
This will be replaced by a proper implementation included in Emscripten 4.0.6.
Add free-threaded versions of the existing FOR_ITER specializations (list, tuple, fast range iterators and generators) without significantly affecting their thread-safety. (Iterating over shared lists/tuples/ranges should be fine, as before. Reusing iterators between threads is not fine, as before. Sharing generators between threads is a recipe for significant crashes, as before.)
The PyThreadState struct gains a reference count field to avoid
issues with PyThreadState being a dangling pointer to freed memory.
The refcount starts with a value of two: one reference is owned by the
interpreter's linked list of thread states and one reference is owned by
the OS thread. The reference count is decremented when the thread state
is removed from the interpreter's linked list and before the OS thread
calls `PyThread_hang_thread()`. The thread that decrements it to zero
frees the `PyThreadState` memory.
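A sketch of the counting scheme (helper wiring hypothetical; `_Py_atomic_add_ssize` is assumed to return the previous value): whichever holder drops the last reference frees the memory.
```
#include "Python.h"

/* Hypothetical decref helper for a thread state with a refcount field. */
static void
tstate_decref(PyThreadState *tstate, Py_ssize_t *refcount)
{
    /* Starts at 2: one ref for the interpreter's linked list, one for
       the OS thread. The thread that drops it to zero frees the memory. */
    if (_Py_atomic_add_ssize(refcount, -1) == 1) {
        PyMem_RawFree(tstate);
    }
}
```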
The `holds_gil` field is moved out of the `_status` bit field, to avoid
a data race where one thread calls `PyThreadState_Clear()`, modifying the
`_status` bit field while the OS thread reads `holds_gil` when
attempting to acquire the GIL.
The `PyThreadState.state` field now has `_Py_THREAD_SHUTTING_DOWN` as a
possible value. This corresponds to the `_PyThreadState_MustExit()`
check. This avoids race conditions in the free threading build when
checking `_PyThreadState_MustExit()`.
* Fix use after free in list objects
Set the items pointer in the list object to NULL after the items array
is freed during list deallocation. Otherwise, we can end up with a list
object added to the free list that contains a pointer to an already-freed
items array (see the sketch after this list).
* Mark `_PyList_FromStackRefStealOnSuccess` as escaping
I think technically it's not escaping, because the only object that
can be decrefed if allocation fails is an exact list, which cannot
execute arbitrary code when it is destroyed. However, this seems less
intrusive than trying to special-case objects in the assert in `_Py_Dealloc`
that checks for non-null stackpointers, and it shouldn't matter for performance.
* Add location information when accessing already closed stackref
* Add #def option to track closed stackrefs to provide precise information for use after free and double frees.
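A condensed sketch of the deallocation fix referenced in the first item (the real list_dealloc does more, e.g. GC untracking):
```
#include "Python.h"

static void
list_dealloc_sketch(PyListObject *op)
{
    Py_ssize_t i = Py_SIZE(op);
    while (--i >= 0) {
        Py_XDECREF(op->ob_item[i]);
    }
    PyMem_Free(op->ob_item);
    /* The fix: clear the pointer so an object recycled from the free
       list can never see the already-freed items array. */
    op->ob_item = NULL;
    PyObject_GC_Del(op);
}
```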
Writing the decimal representation of a Unicode codepoint only requires knowing the number of digits.
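A sketch of the idea (not the actual unicodeobject.c code): count the digits first, then write right-to-left into an exactly-sized span, so no temporary buffer or second formatting pass is needed.
```
/* Writes the decimal form of codepoint into out[0..ndigits-1]. */
static void
write_decimal(char *out, unsigned int codepoint)
{
    int ndigits = 1;
    for (unsigned int t = codepoint; t >= 10; t /= 10) {
        ndigits++;
    }
    char *p = out + ndigits;
    do {
        *--p = (char)('0' + codepoint % 10);
        codepoint /= 10;
    } while (codepoint != 0);
}
```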
---------
Co-authored-by: Petr Viktorin <encukou@gmail.com>
Move some `#include <stdbool.h>` after `#include "Python.h"` when `pyconfig.h` is not
included first and when we are in a platform-agnostic context. This is to avoid having
features defined by `stdbool.h` before those decided by `Python.h`.
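The safe ordering, in short:
```
/* Python.h pulls in pyconfig.h and sets feature-test macros first; only
   then may system headers such as <stdbool.h> be included. */
#include "Python.h"

#include <stdbool.h>
```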
* Combine _GUARD_GLOBALS_VERSION_PUSH_KEYS and _LOAD_GLOBAL_MODULE_FROM_KEYS into _LOAD_GLOBAL_MODULE
* Combine _GUARD_BUILTINS_VERSION_PUSH_KEYS and _LOAD_GLOBAL_BUILTINS_FROM_KEYS into _LOAD_GLOBAL_BUILTINS
* Combine _CHECK_ATTR_MODULE_PUSH_KEYS and _LOAD_ATTR_MODULE_FROM_KEYS into _LOAD_ATTR_MODULE
* Remove stack transient in LOAD_ATTR_WITH_HINT
Remove inclusions prior to Python.h.
<stdbool.h> will cause <features.h> to be included before Python.h can
define some macros to enable some additional features, causing multiple
types not to be defined down the line.
This moves `tstate_activate()` down to avoid a data race in the free
threading build on the `_PyRuntime`'s thread-local `autoTSSkey`. This
key is deleted during runtime finalization, which may happen
concurrently with a call to `_PyThreadState_Attach`.
The earlier `tstate_try/wait_attach` ensures that the thread is blocked
before it attempts to access the deleted `autoTSSkey`.
This fixes a TSAN reported data race in
`test_threading.test_import_from_another_thread`.
Windows and macOS require precomputing a "timebase" in order to convert
OS timestamps into nanoseconds. Retrieve and compute this value during
runtime initialization to avoid data races when accessing the time.
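An illustrative macOS version of the pattern (names hypothetical): fetch the timebase once during initialization, after which conversions are pure arithmetic on immutable data and therefore race-free.
```
#include <mach/mach_time.h>
#include <stdint.h>

static mach_timebase_info_data_t timebase;  /* filled once at init */

static void
init_timebase(void)
{
    (void)mach_timebase_info(&timebase);
}

static uint64_t
ticks_to_ns(uint64_t ticks)
{
    /* Race-free: the timebase is immutable after initialization. */
    return ticks * timebase.numer / timebase.denom;
}
```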
The use of PySys_GetObject() and _PySys_GetAttr(), which return a borrowed
reference, has been replaced by using one of the following functions, which
return a strong reference and distinguish a missing attribute from an error:
_PySys_GetOptionalAttr(), _PySys_GetOptionalAttrString(),
_PySys_GetRequiredAttr(), and _PySys_GetRequiredAttrString().
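A sketch of the new calling pattern for the internal-only function, assuming the three-way return convention (1 found, 0 missing, -1 error with an exception set):
```
#include "Python.h"

static int
demo_use_stdout(void)
{
    PyObject *out = NULL;
    int res = _PySys_GetOptionalAttrString("stdout", &out);
    if (res < 0) {
        return -1;              /* error: exception already set */
    }
    if (res == 0) {
        return 0;               /* attribute missing, no error */
    }
    /* ... use the strong reference ... */
    Py_DECREF(out);
    return 1;
}
```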
* Implement C recursion protection with limit pointers for Linux, macOS and Windows (see the sketch after this list)
* Remove calls to PyOS_CheckStack
* Add stack protection to parser
* Make tests more robust to low stacks
* Improve error messages for stack overflow
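A minimal sketch of the limit-pointer idea, with hypothetical names and assuming a downward-growing stack: compare an address on the current stack against a precomputed limit instead of counting recursion depth.
```
#include <stdint.h>

static uintptr_t c_stack_soft_limit;  /* computed at thread start */

static inline int
c_stack_overflowed(void)
{
    char marker;
    /* On a downward-growing stack, deeper calls have smaller addresses. */
    return (uintptr_t)&marker < c_stack_soft_limit;
}
```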
Revert "GH-91079: Implement C stack limits using addresses, not counters. (GH-130007)" for now
Unfortunately, the change broke some buildbots.
This reverts commit 2498c22fa0.
As of iOS 18.3.1, enabling wasm-gc breaks the interpreter. This disables the wasm-gc
trampoline on iOS.
Co-authored-by: Hood Chatham <roberthoodchatham@gmail.com>
CPython currently switches `PYMEM_DOMAIN_RAW` to the default
allocator temporarily during initialization and shutdown. The motivation is to
ensure that core runtime structures are allocated and freed using the
same allocator. However, modifying the current allocator changes global
state and is not thread-safe even with the GIL. Other threads may be
allocating or freeing memory with PYMEM_DOMAIN_RAW; they are not
required to hold the GIL to call PyMem_RawMalloc/PyMem_RawFree.
This adds new internal-only functions like `_PyMem_DefaultRawMalloc`
that aren't affected by calls to `PyMem_SetAllocator()`, so they're
appropriate for Python runtime initialization and finalization. Use
these calls in places where we previously swapped to the default raw
allocator.
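A sketch of the intended usage (`_PyMem_DefaultRawFree` is assumed here as the matching release function): init/fini code pairs allocation and free through the default raw allocator, immune to `PyMem_SetAllocator()`.
```
#include "Python.h"

static void
demo_runtime_init_alloc(size_t size)
{
    /* Always the default raw allocator, regardless of PyMem_SetAllocator(). */
    void *buf = _PyMem_DefaultRawMalloc(size);
    if (buf == NULL) {
        return;
    }
    /* ... use buf during initialization ... */
    _PyMem_DefaultRawFree(buf);  /* assumed counterpart */
}
```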
* Implement C recursion protection with limit pointers
* Remove calls to PyOS_CheckStack
* Add stack protection to parser
* Make tests more robust to low stacks
* Improve error messages for stack overflow
Use an atomic operation when setting
`_PyRuntime.signals.unhandled_keyboard_interrupt`. We now only clear the
variable at the start of `_PyRun_Main`, which is the same function where
we check it.
This avoids race conditions where previously another thread might call
`run_eval_code_obj()` and erroneously clear the unhandled keyboard
interrupt.
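A sketch of the pattern in portable C11 atomics (CPython uses its own `_Py_atomic_*` wrappers): the flag is published with an atomic store and consumed in one place.
```
#include <stdatomic.h>

static atomic_int unhandled_keyboard_interrupt;

/* Any thread may set the flag... */
static void
set_unhandled_ki(void)
{
    atomic_store(&unhandled_keyboard_interrupt, 1);
}

/* ...but only the code path that checks it also clears it. */
static int
check_and_clear_ki(void)
{
    return atomic_exchange(&unhandled_keyboard_interrupt, 0);
}
```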
The reference count fields, such as `ob_tid` and `ob_ref_shared`, may be
accessed concurrently in the free threading build by a `_Py_TryXGetRef`
or similar operation. The PyObject header fields will be initialized by
`_PyObject_Init`, so only call `memset()` to zero-initialize the remainder
of the allocation.
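In sketch form (helper name hypothetical): only the bytes after the object header are zeroed, since the header, including the refcount words a concurrent `_Py_TryXGetRef` may read, is initialized by `_PyObject_Init`.
```
#include "Python.h"
#include <string.h>

static void
zero_object_tail(PyObject *op, size_t alloc_size)
{
    /* Leave the PyObject header (ob_tid, ob_ref_shared, ...) alone;
       zero only the remainder of the allocation. */
    memset((char *)op + sizeof(PyObject), 0,
           alloc_size - sizeof(PyObject));
}
```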
The `gc_get_refs` assertion needs to be after we check the alive and
unreachable bits. Otherwise, `ob_tid` may store the actual thread id
instead of the computed `gc_refs`, which may trigger the assertion if
the `ob_tid` looks like a negative value.
Also fix a few type warnings on 32-bit systems.
For the free-threaded version of the cyclic GC, restructure the "mark alive" phase to use software prefetch instructions. This gives a speedup in most cases when the number of objects is large enough. The prefetching is enabled conditionally based on the number of long-lived objects the GC finds.
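A sketch of the prefetch pattern (the visitor is hypothetical; `__builtin_prefetch` is the GCC/Clang intrinsic): fetch a fixed distance ahead of the scan so the cache miss resolves before the object is visited.
```
#include "Python.h"

extern void mark_alive(PyObject *op);  /* hypothetical visitor */

static void
mark_alive_all(PyObject **items, Py_ssize_t n)
{
    for (Py_ssize_t i = 0; i < n; i++) {
        if (i + 8 < n) {
            /* Read prefetch, high temporal locality, 8 objects ahead. */
            __builtin_prefetch(items[i + 8], 0, 3);
        }
        mark_alive(items[i]);
    }
}
```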
The codegen phase has an optimization that transforms
```
LOAD_CONST x
LOAD_CONST y
LOAD_CONST z
BUILD_LIST/BUILD_SET (3)
```
->
```
BUILD_LIST/BUILD_SET (0)
LOAD_CONST (x, y, z)
LIST_EXTEND/SET_UPDATE 1
```
This optimization has now been moved to the CFG phase to make #128802 work.
Co-authored-by: Irit Katriel <1055913+iritkatriel@users.noreply.github.com>
Co-authored-by: Yan Yanchii <yyanchiy@gmail.com>