Mark a few functions used by the interpreter loop as noinline
These are all on the slow path and should not be inlined into the interpreter
loop. Unfortunately, they end up being inlined with LTO and the current PGO
task.
Make `warnings.catch_warnings()` use a context variable for holding
the warning filtering state if the `sys.flags.context_aware_warnings`
flag is set to true. This makes using the context manager thread-safe in
multi-threaded programs.
Add the `sys.flags.thread_inherit_context` flag. If true, starting a new
thread with `threading.Thread` will use a copy of the context
from the caller of `Thread.start()`.
Both these flags are set to true by default for the free-threaded build
and false for the default build.
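A minimal sketch of the intended behavior when the flags are enabled (for example in the free-threaded build): each thread can adjust warning filters through `warnings.catch_warnings()` without clobbering the filters seen by other threads.

```python
import threading
import warnings

def probe(action, results):
    # With context-aware warnings enabled, this filter change is held in a
    # context variable local to this thread rather than in global state.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter(action)
        warnings.warn("probe", UserWarning)
        results[action] = len(caught)

results = {}
threads = [threading.Thread(target=probe, args=(action, results))
           for action in ("always", "ignore")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # expected: {'always': 1, 'ignore': 0} when the flag is set
```

With `thread_inherit_context` also enabled, a thread started inside a `catch_warnings()` block would additionally begin with a copy of the caller's context and so would observe the caller's filter state.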
Move the Python implementation of warnings.py into _py_warnings.py.
Make _contextvars a builtin module.
Co-authored-by: Kumar Aditya <kumaraditya@python.org>
The SLP autovectorizer can cause poor code generation for opcode dispatch, negating any benefit we get from vectorization elsewhere in the interpreter loop.
Non-tuple sequences are deprecated as arguments for the "(items)" format unit
in PyArg_ParseTuple() and other argument parsing functions if items contains
format units that store a borrowed buffer or reference (e.g. "s" and "O").
str and bytearray are no longer accepted as valid sequences.
A new extension module, `_hmac`, now exposes the formally verified HACL* HMAC implementation.
It is used as a fallback when the OpenSSL implementation of HMAC is not available or is disabled.
For now, only named hash algorithms are recognized, and the SIMD support that HACL* provides for
the BLAKE2 hash functions is not yet used.
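At the Python level the `hmac` module API is unchanged; only the backing implementation can differ. A small usage sketch with a named hash algorithm (the case the HACL* code currently covers):

```python
import hmac

# A named algorithm such as "sha256" may be served by the HACL* backend when
# OpenSSL's HMAC is unavailable or disabled; the result is identical either way.
mac = hmac.new(b"secret-key", b"message", "sha256")
print(mac.hexdigest())

# One-shot helper, also taking a named algorithm.
print(hmac.digest(b"secret-key", b"message", "sha256").hex())
```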
CPython's pthread-based thread identifier relies on pthread_t being able
to be represented as an unsigned integer type.
This is true in most Linux libc implementations, where it is defined as an
unsigned long; musl, however, typedefs it as a pointer to a struct.
If the pointer has the high bit set and is cast to PyThread_ident_t, the
resultant value can be sign-extended [0]. This can cause issues when
comparing against threading._MainThread's identifier. The main thread's
identifier value is retrieved via _get_main_thread_ident, which is backed
by an unsigned long and therefore truncates the sign-extended bits.
>>> hex(threading.main_thread().ident)
'0xb6f33f3c'
>>> hex(threading.current_thread().ident)
'0xffffffffb6f33f3c'
Work around this by conditionally compiling in code, for non-glibc Linux
platforms at risk of sign extension, that returns a PyLong based on the main
thread's unsigned long thread identifier when the current thread is the main
thread.
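With the workaround in place, the two values above agree again; a minimal check (only interesting on an affected platform such as musl-based Linux):

```python
import threading

# After the fix, the ident observed from inside the main thread matches
# threading.main_thread().ident even where pthread_t is a pointer with the
# high bit set.
assert threading.current_thread().ident == threading.main_thread().ident
```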
[0]: https://gcc.gnu.org/onlinedocs/gcc-14.2.0/gcc/Arrays-and-pointers-implementation.html
---------
Signed-off-by: Vincent Fazio <vfazio@gmail.com>
Optimize `LOAD_FAST` opcodes into faster versions that load borrowed references onto the operand stack when we can prove that the local outlives the temporary loaded onto the stack.
* Rename 'defined' attribute to 'in_local' to more accurately reflect how it is used
* Make death of variables explicit even for array variables.
* Convert in_memory from boolean to stack offset
* Don't apply liveness analysis to optimizer-generated code
* Add 'out' parameter to stack.pop
Minor readability fix in PyUnstable_GC_VisitObjects
Replaces `if (visit_generation())` with `if (visit_generation() < 0)`,
since we are checking for the failure case, and it's confusing to have
that be implicitly `true`.
Also fixes a misspelt variable name.
In the free-threaded build, the `_PyObject_LookupSpecial()` call can lead to
reference count contention on the returned function object because it
doesn't use stackrefs. Refactor some of the callers to use
`_PyObject_MaybeCallSpecialNoArgs`, which uses stackrefs internally.
This fixes the scaling bottleneck in the "lookup_special" microbenchmark
in `ftscalingbench.py`. However, there are still some uses of
`_PyObject_LookupSpecial()` that need to be addressed in future PRs.
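As a rough sketch (not the benchmark itself) of the kind of workload that hits this path: many threads repeatedly triggering a special-method lookup on one shared object.

```python
import threading

class Number:
    def __round__(self):
        return 0

shared = Number()

def spin():
    # round() performs a special-method lookup of __round__ on the type;
    # with many threads hammering the same object, that lookup used to
    # contend on the reference count of the looked-up function object.
    for _ in range(100_000):
        round(shared)

threads = [threading.Thread(target=spin) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```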
Concurrent accesses from multiple threads to the same `cell` object did not
scale well in the free-threaded build. Use `_PyStackRef` and optimistically
avoid locking to improve scaling.
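A rough sketch of the access pattern this targets: many threads concurrently reading the same closure variable, i.e. the same `cell` object.

```python
import threading

def make_reader():
    value = 42  # captured in a cell shared by every reader thread below

    def read():
        total = 0
        for _ in range(100_000):
            total += value  # concurrent reads of the same cell object
        return total

    return read

read = make_reader()
threads = [threading.Thread(target=read) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```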
With the locks around cell reads gone, some of the free threading tests were
prone to starvation: the readers were able to run in a tight loop and the
writer threads weren't scheduled frequently enough to make timely progress.
Adjust the tests to avoid this.
Co-authored-by: Donghee Na <donghee.na@python.org>
* Rename 'defined' attribute to 'in_local' to more accurately reflect how it is used
* Make death of variables explicit even for array variables.
* Convert in_memory from boolean to stack offset
* Don't apply liveness analysis to optimizer-generated code
* Fix RETURN_VALUE in optimizer
This tells TSAN not to sanitize `PyUnstable_InterpreterFrame_GetLine()`.
There's a possible data race on the access to the frame's `instr_ptr`
if the frame is currently executing. We don't really care about the
race. In theory, we could use relaxed atomics for every access to
`instr_ptr`, but that would create more code churn and current compilers
are overly conservative with optimizations around relaxed atomic
accesses.
We also don't sanitize `_PyFrame_IsIncomplete()` because it accesses
`instr_ptr` and is called from assertions within PyFrame_GetCode().
* Move _Py_VISIT_STACKREF() from pycore_gc.h to pycore_stackref.h.
* Remove pycore_interpframe.h include from pycore_genobject.h.
* Remove now useless includes from C files.
* Add pycore_interpframe_structs.h to Makefile.pre.in and
pythoncore.vcxproj.
This makes more operations on frame objects thread-safe in the free-threaded
build, which fixes some data races that occurred when passing exceptions
between threads.
However, accessing a thread's local variables from another thread while it is
running is still not thread-safe and may crash the interpreter.
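A hedged sketch of the kind of cross-thread frame access this covers, using `sys._current_frames()`; reading the other thread's locals while it runs is exactly what remains unsafe.

```python
import sys
import threading
import time

def worker(stop):
    x = 0
    while not stop.is_set():
        x += 1

stop = threading.Event()
t = threading.Thread(target=worker, args=(stop,))
t.start()
time.sleep(0.1)

# Inspect the running thread's topmost frame object from this thread.
# Touching frame.f_locals here, while the worker is still running, is the
# part that is still not thread-safe.
frame = sys._current_frames().get(t.ident)
if frame is not None:
    print(frame.f_code.co_name)

stop.set()
t.join()
```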
Don't use PyObject_Free() as tp_dealloc to avoid undefined behavior.
Instead, use the default deallocator, which just calls tp_free, which is
PyObject_Free().