Make `warnings.catch_warnings()` use a context variable for holding
the warning filtering state if the `sys.flags.context_aware_warnings`
flag is set to true. This makes using the context manager thread-safe in
multi-threaded programs.
Add the `sys.flags.thread_inherit_context` flag. If true, starting a new
thread with `threading.Thread` will use a copy of the context
from the caller of `Thread.start()`.
Both of these flags default to true for the free-threaded build
and to false for the default build.
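As a hedged illustration (the flag names come from this change; the worker and filter choices are made up), here is how `warnings.catch_warnings()` can be used from multiple threads without clobbering other threads' filters when context-aware warnings are enabled:

```python
import sys
import threading
import warnings

def worker():
    # With context-aware warnings enabled, this filter change is confined to
    # the current thread's context instead of mutating global state.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", DeprecationWarning)
        warnings.warn("old API", DeprecationWarning)

# These flags only exist on Python versions that include this change.
print("context-aware warnings:", sys.flags.context_aware_warnings)
print("threads inherit context:", sys.flags.thread_inherit_context)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```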
Move the Python implementation of `warnings.py` into `_py_warnings.py`.
Make `_contextvars` a builtin module.
Co-authored-by: Kumar Aditya <kumaraditya@python.org>
Optimize `LOAD_FAST` opcodes into faster versions that load borrowed references onto the operand stack when we can prove that the lifetime of the local outlives the lifetime of the temporary that is loaded onto the stack.
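As a rough illustration (not part of this change), the sketch below disassembles a small function whose locals clearly outlive the temporaries pushed for the addition; on interpreters that apply this optimization, some `LOAD_FAST` instructions may be replaced by borrowing variants, and the exact opcode names are version-dependent:

```python
import dis

def add(a, b):
    # 'a' and 'b' stay alive for the whole call, so the references pushed
    # onto the operand stack for the addition can be borrowed rather than
    # owned.
    return a + b

dis.dis(add)
```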
Use the standard `__ARM_ARCH` macro, which is supported by GCC and Clang.
The branching logic for `__ARMEL__` has been removed: if the target
architecture supports v7+ instructions, a `yield` is emitted; otherwise a
`nop` is emitted. This covers both big- and little-endian scenarios.
Signed-off-by: Vincent Fazio <vfazio@gmail.com>
In the free-threaded build, the `_PyObject_LookupSpecial()` call can lead to
reference count contention on the returned function object because it
doesn't use stackrefs. Refactor some of the callers to use
`_PyObject_MaybeCallSpecialNoArgs`, which uses stackrefs internally.
This fixes the scaling bottleneck in the "lookup_special" microbenchmark
in `ftscalingbench.py`. However, there are still some uses of
`_PyObject_LookupSpecial()` that need to be addressed in future PRs.
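A hedged sketch of the kind of workload this targets, assuming the contention comes from many threads repeatedly triggering special-method lookup on a shared object, loosely in the spirit of the "lookup_special" microbenchmark (the class name, thread count, and iteration count are illustrative):

```python
import threading

class Number:
    def __round__(self):
        return 0

obj = Number()

def worker(iterations=100_000):
    for _ in range(iterations):
        round(obj)  # each call looks up the __round__ special method

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```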
Concurrent accesses from multiple threads to the same `cell` object did not
scale well in the free-threaded build. Use `_PyStackRef` and optimistically
avoid locking to improve scaling.
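A rough sketch of the access pattern in question, with several reader threads and one writer sharing a single closure cell (all names and counts here are illustrative):

```python
import threading

def make_counter():
    value = 0

    def read():
        return value   # reads the shared cell

    def bump():
        nonlocal value
        value += 1     # writes the shared cell

    return read, bump

read, bump = make_counter()
stop = threading.Event()

def reader():
    while not stop.is_set():
        read()

readers = [threading.Thread(target=reader) for _ in range(4)]
for t in readers:
    t.start()
for _ in range(100_000):
    bump()
stop.set()
for t in readers:
    t.join()
```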
With the locks around cell reads gone, some of the free threading tests were
prone to starvation: the readers were able to run in a tight loop and the
writer threads weren't scheduled frequently enough to make timely progress.
Adjust the tests to avoid this.
Co-authored-by: Donghee Na <donghee.na@python.org>
* Rename the 'defined' attribute to 'in_local' to more accurately reflect how it is used.
* Make the death of variables explicit, even for array variables.
* Convert 'in_memory' from a boolean to a stack offset.
* Don't apply liveness analysis to optimizer-generated code.
* Fix `RETURN_VALUE` in the optimizer.
This tells TSAN not to sanitize `PyUnstable_InterpreterFrame_GetLine()`.
There's a possible data race on the access to the frame's `instr_ptr`
if the frame is currently executing. We don't really care about the
race. In theory, we could use relaxed atomics for every access to
`instr_ptr`, but that would create more code churn and current compilers
are overly conservative with optimizations around relaxed atomic
accesses.
We also don't sanitize `_PyFrame_IsIncomplete()` because it accesses
`instr_ptr` and is called from assertions within `PyFrame_GetCode()`.
- Restore the maximum field size to `sys.maxsize`, as in Python 3.13 and below.
- `PyCField`: split out bit/byte sizes and offsets.
- Expose `CField`'s size/offset data to Python code (see the sketch after this list).
- Add generic checks for all the test structs/unions, using the newly exposed attributes.
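A small, hedged example of inspecting a `CField` descriptor from Python; the `offset`/`size` attributes shown live predate this change, while the byte/bit attribute names in the comment are an assumption about what is newly exposed and should be checked against the documentation:

```python
import ctypes

class Flags(ctypes.Structure):
    _fields_ = [
        ("mode", ctypes.c_uint8, 3),   # 3-bit bit field
        ("ready", ctypes.c_uint8, 1),  # 1-bit bit field
        ("count", ctypes.c_uint16),
    ]

field = Flags.count                    # the CField descriptor
print(field.offset, field.size)        # attributes available in 3.13 and below
# Assumed names for the newly exposed byte/bit data (verify before relying on them):
# print(field.byte_offset, field.byte_size, field.bit_offset, field.bit_size)
```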
* Move _Py_VISIT_STACKREF() from pycore_gc.h to pycore_stackref.h.
* Remove pycore_interpframe.h include from pycore_genobject.h.
* Remove now-useless includes from C files.
* Add pycore_interpframe_structs.h to Makefile.pre.in and
pythoncore.vcxproj.
Move the pycore_obmalloc.h include from pycore_interp_structs.h to
pycore_runtime_structs.h.
Also add comments explaining the purpose of each include in
pycore_interp_structs.h, pycore_runtime_structs.h and
pycore_structs.h.
Remove <stdbool.h> and <stddef.h> from pycore_structs.h.