Add free-threaded versions of the existing FOR_ITER specializations (lists, tuples, fast range iterators, and generators) without significantly affecting their thread-safety. (Iterating over shared lists/tuples/ranges is fine, as before. Reusing iterators between threads is not fine, as before. Sharing generators between threads is a recipe for significant crashes, as before.)
The `PyThreadState` structure gains a reference count field to avoid
issues with `PyThreadState` being a dangling pointer to freed memory.
The refcount starts with a value of two: one reference is owned by the
interpreter's linked list of thread states and one reference is owned by
the OS thread. The reference count is decremented when the thread state
is removed from the interpreter's linked list and before the OS thread
calls `PyThread_hang_thread()`. The thread that decrements it to zero
frees the `PyThreadState` memory.
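A minimal sketch of the two-owner reference count scheme, using illustrative names and plain C11 atomics rather than the actual fields and helpers in pystate.c:

```c
#include <stdatomic.h>
#include <stdlib.h>

typedef struct thread_state {
    atomic_int refcount;    /* starts at 2: interpreter list + OS thread */
    /* ... remaining thread state ... */
} thread_state;

static thread_state *
thread_state_new(void)
{
    thread_state *ts = calloc(1, sizeof(*ts));
    if (ts != NULL) {
        atomic_store(&ts->refcount, 2);
    }
    return ts;
}

/* Called once when the state is unlinked from the interpreter's list and
   once just before the OS thread parks itself. Whichever caller drops the
   count to zero frees the memory, so neither owner is left holding a
   dangling pointer. */
static void
thread_state_release(thread_state *ts)
{
    if (atomic_fetch_sub(&ts->refcount, 1) == 1) {
        free(ts);
    }
}
```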
The `holds_gil` field is moved out of the `_status` bit field, to avoid
a data race where one thread calls `PyThreadState_Clear()`, modifying the
`_status` bit field while the OS thread reads `holds_gil` when
attempting to acquire the GIL.
The `PyThreadState.state` field now has `_Py_THREAD_SHUTTING_DOWN` as a
possible value. This corresponds to the `_PyThreadState_MustExit()`
check. This avoids race conditions in the free threading build when
checking `_PyThreadState_MustExit()`.
* Fix use after free in list objects
Set the items pointer in the list object to NULL after the items array
is freed during list deallocation. Otherwise, we can end up with a list
object added to the free list that contains a pointer to an already-freed
items array.
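Illustrative shape of the fix (a sketch, not the actual code in Objects/listobject.c), using the real `PyListObject.ob_item` field and `PyMem_Free()`:

```c
#include <Python.h>

static void
clear_list_items(PyListObject *op)
{
    if (op->ob_item != NULL) {
        PyMem_Free(op->ob_item);
        /* Clear the pointer: the list object may be pushed onto a free
           list and reused, so it must not keep pointing at freed memory. */
        op->ob_item = NULL;
    }
}
```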
* Mark `_PyList_FromStackRefStealOnSuccess` as escaping
I think technically it's not escaping, because the only object that
can be decrefed if allocation fails is an exact list, which cannot
execute arbitrary code when it is destroyed. However, this seems less
intrusive than trying to special-case objects in the assert in `_Py_Dealloc`
that checks for non-null stack pointers, and it shouldn't matter for performance.
* Add location information when accessing already closed stackref
* Add a #define option to track closed stackrefs, providing precise information for use-after-free and double-free errors.
Disable the pedantic check for C++03 (non-limited API)
Also add a check for the C++03 *limited* API, which passes in pedantic mode
after removing a comma in the `PySendResult` declaration and allowing
`long long`.
* Combine _GUARD_GLOBALS_VERSION_PUSH_KEYS and _LOAD_GLOBAL_MODULE_FROM_KEYS into _LOAD_GLOBAL_MODULE
* Combine _GUARD_BUILTINS_VERSION_PUSH_KEYS and _LOAD_GLOBAL_BUILTINS_FROM_KEYS into _LOAD_GLOBAL_BUILTINS
* Combine _CHECK_ATTR_MODULE_PUSH_KEYS and _LOAD_ATTR_MODULE_FROM_KEYS into _LOAD_ATTR_MODULE
* Remove stack transient in LOAD_ATTR_WITH_HINT
Move deprecated PyUnicode API docs to new section
Move Py_UNICODE to a new "Deprecated API" section.
Formally soft-deprecate PyUnicode_READY, and move it there
Document and soft-deprecate PyUnicode_IS_READY, and move it there
Document PyUnicode_IS_ASCII, PyUnicode_CHECK_INTERNED
PyUnicode_New docs: Clarify requirements for "fresh" strings
PyUnicodeWriter_DecodeUTF8Stateful: Link "error-handlers"
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Windows and macOS require precomputing a "timebase" in order to convert
OS timestamps into nanoseconds. Retrieve and compute this value during
runtime initialization to avoid data races when accessing the time.
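For example, on macOS the conversion depends on `mach_timebase_info()`; a sketch of querying it once during initialization instead of lazily on first use (Windows would precompute `QueryPerformanceFrequency()` the same way):

```c
#include <stdint.h>
#include <mach/mach_time.h>    /* macOS only */

static mach_timebase_info_data_t timebase;   /* filled once during init */

static void
time_init(void)
{
    (void)mach_timebase_info(&timebase);      /* no lazy init, no race */
}

static uint64_t
ticks_to_ns(uint64_t ticks)
{
    return ticks * timebase.numer / timebase.denom;
}
```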
The `free_work_item()` function in QSBR may call arbitrary code via
Python object destructors, which may reenter the QSBR code. Reorder
the processing of work items to be robust to reentrancy.
Also fix the TODO for the out-of-memory situation.
The use of PySys_GetObject() and _PySys_GetAttr(), which return a borrowed
reference, has been replaced by using one of the following functions, which
return a strong reference and distinguish a missing attribute from an error:
_PySys_GetOptionalAttr(), _PySys_GetOptionalAttrString(),
_PySys_GetRequiredAttr(), and _PySys_GetRequiredAttrString().
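A sketch of the new pattern; the exact signatures of these private helpers are assumed here (the optional variants returning 1/0/-1 and passing a strong reference back through an output parameter):

```c
#include <Python.h>

static int
write_to_sys_stderr(PyObject *msg)
{
    PyObject *stderr_obj;
    int res = _PySys_GetOptionalAttrString("stderr", &stderr_obj);
    if (res < 0) {
        return -1;                  /* error while fetching the attribute */
    }
    if (res == 0) {
        return 0;                   /* sys.stderr is missing: not an error */
    }
    res = PyFile_WriteObject(msg, stderr_obj, Py_PRINT_RAW);
    Py_DECREF(stderr_obj);          /* strong ref, unlike PySys_GetObject() */
    return res;
}
```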
This fixes a fairly subtle bug involving finalizers and resurrection in
debug free threaded builds: if `_PyObject_ResurrectEnd` returns `1`
(i.e., the object was resurrected by a finalizer), it's not safe to
access the object because it might still be deallocated. For example:
* The finalizer may have exposed the object to another thread. That
thread may hold the last reference and concurrently deallocate it any
time after `_PyObject_ResurrectEnd()` returns `1`.
* `_PyObject_ResurrectEnd()` may call `_Py_brc_queue_object()`, which
may internally deallocate the object immediately if the owning thread
is dead.
Therefore, it's important not to access the object after it's
resurrected. We only violate this in two cases, and only in debug
builds:
* We assert that the object is tracked appropriately. This check is now
moved up to between the finalizer and the `_PyObject_ResurrectEnd()` call.
* The `--with-trace-refs` builds may need to remember the object if
it's resurrected. This is now handled by `_PyObject_ResurrectStart()`
and `_PyObject_ResurrectEnd()`.
Note that `--with-trace-refs` is currently disabled in `--disable-gil`
builds because the refchain hash table isn't thread-safe, but this
refactoring avoids an additional thread-safety issue.
* Implement C recursion protection with limit pointers for Linux, macOS, and Windows (see the sketch after this list)
* Remove calls to PyOS_CheckStack
* Add stack protection to parser
* Make tests more robust to low stacks
* Improve error messages for stack overflow
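A minimal sketch of the limit-pointer idea, assuming a downward-growing stack and illustrative field names (the real limits live on the thread state and are derived per platform):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uintptr_t c_stack_soft_limit;   /* addresses below this are "too deep" */
} stack_limits;

static bool
stack_has_room(const stack_limits *limits)
{
    char here;                               /* address of a local variable  */
    uintptr_t sp = (uintptr_t)&here;         /* approximates the C stack pointer */
    return sp > limits->c_stack_soft_limit;  /* compare addresses, don't count calls */
}
```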
Revert "GH-91079: Implement C stack limits using addresses, not counters. (GH-130007)" for now
Unfortunately, the change broke some buildbots.
This reverts commit 2498c22fa0.
CPython currently changes `PYMEM_DOMAIN_RAW` to the default allocator
temporarily during initialization and shutdown. The motivation is to
ensure that core runtime structures are allocated and freed using the
same allocator. However, modifying the current allocator changes global
state and is not thread-safe even with the GIL. Other threads may be
allocating or freeing objects using PYMEM_DOMAIN_RAW; they are not
required to hold the GIL to call PyMem_RawMalloc/PyMem_RawFree.
This adds new internal-only functions like `_PyMem_DefaultRawMalloc`
that aren't affected by calls to `PyMem_SetAllocator()`, so they're
appropriate for Python runtime initialization and finalization. Use
these calls in places where we previously swapped to the default raw
allocator.
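Sketch of the intended use during initialization/finalization; `_PyMem_DefaultRawFree()` is assumed to be the matching counterpart, and both are internal-only (Py_BUILD_CORE):

```c
#include <Python.h>
#include <string.h>

static char *
runtime_copy_string(const char *s)
{
    size_t size = strlen(s) + 1;
    /* Always uses the default raw allocator, regardless of any
       PyMem_SetAllocator() call made by the embedding application. */
    char *copy = _PyMem_DefaultRawMalloc(size);
    if (copy != NULL) {
        memcpy(copy, s, size);
    }
    return copy;   /* released later with _PyMem_DefaultRawFree(copy) */
}
```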
Deprecate private C API functions:
* _PyUnicodeWriter_Init()
* _PyUnicodeWriter_Finish()
* _PyUnicodeWriter_Dealloc()
* _PyUnicodeWriter_WriteChar()
* _PyUnicodeWriter_WriteStr()
* _PyUnicodeWriter_WriteSubstring()
* _PyUnicodeWriter_WriteASCIIString()
* _PyUnicodeWriter_WriteLatin1String()
These functions are not deprecated in the internal C API (if the
Py_BUILD_CORE macro is defined).
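For comparison, the public PyUnicodeWriter API (available since Python 3.14) covers the same ground; a minimal usage sketch:

```c
#include <Python.h>

static PyObject *
build_greeting(PyObject *name)
{
    PyUnicodeWriter *writer = PyUnicodeWriter_Create(0);
    if (writer == NULL) {
        return NULL;
    }
    if (PyUnicodeWriter_WriteUTF8(writer, "hello, ", -1) < 0
        || PyUnicodeWriter_WriteStr(writer, name) < 0)
    {
        PyUnicodeWriter_Discard(writer);
        return NULL;
    }
    return PyUnicodeWriter_Finish(writer);   /* new reference to the str */
}
```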
* Implement C recursion protection with limit pointers
* Remove calls to PyOS_CheckStack
* Add stack protection to parser
* Make tests more robust to low stacks
* Improve error messages for stack overflow
* gh-129701: Fix a data race in `intern_common` in the free threaded build
* Use a mutex to avoid potentially returning a non-immortalized string,
because immortalization happens after the insertion into the interned
dict.
* Use `Py_DECREF()` calls instead of `Py_SET_REFCNT(s, Py_REFCNT(s) - 2)`
for thread-safety. This code path isn't performance sensitive, so
just use `Py_DECREF()` unconditionally for simplicity.
Use an atomic operation when setting
`_PyRuntime.signals.unhandled_keyboard_interrupt`. We now only clear the
variable at the start of `_PyRun_Main`, which is the same function where
we check it.
This avoids race conditions where previously another thread might call
`run_eval_code_obj()` and erroneously clear the unhandled keyboard
interrupt.
The MemoryError freelist was not thread-safe in the free threaded build.
Use a mutex to protect accesses to the freelist. Unlike other freelists,
the MemoryError freelist is not performance sensitive.
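Illustrative shape of the fix (the real freelist lives in Objects/exceptions.c; the array layout below is only for the sketch), using the `PyMutex` lock API:

```c
#include <Python.h>

#define FREELIST_SIZE 16

static PyMutex freelist_mutex;              /* statically zero-initialized */
static PyObject *freelist[FREELIST_SIZE];
static int freelist_len;

static PyObject *
freelist_pop(void)
{
    PyObject *obj = NULL;
    PyMutex_Lock(&freelist_mutex);
    if (freelist_len > 0) {
        obj = freelist[--freelist_len];
    }
    PyMutex_Unlock(&freelist_mutex);
    return obj;
}

static int
freelist_push(PyObject *obj)
{
    int stored = 0;
    PyMutex_Lock(&freelist_mutex);
    if (freelist_len < FREELIST_SIZE) {
        freelist[freelist_len++] = obj;
        stored = 1;
    }
    PyMutex_Unlock(&freelist_mutex);
    return stored;
}
```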
Implement PyUnicode_KIND() and PyUnicode_DATA() as functions, in
addition to the macros with the same names. The macros rely on C bit
fields, which have a compiler-specific layout.
The stack pointers in interpreter frames are nearly always valid now, so
use them when visiting each thread's frame. For now, don't collect
objects with deferred references in the rare case that we see a frame
with a NULL stack pointer.
Use relative includes in Include/cpython/pyatomic.h for
pyatomic_gcc.h, pyatomic_std.h and pyatomic_msc.h.
Make a similar change in Include/cpython/pythread.h for the
pthread_stubs.h include.
This exposes `_Py_TryIncref` as `PyUnstable_TryIncref()` and the helper
function `_PyObject_SetMaybeWeakref` as `PyUnstable_EnableTryIncRef`.
These are helpers for dealing with unowned references in a safe way,
particularly in the free threading build.
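A simplified sketch of the intended pattern for a cache holding an unowned pointer; the signatures are assumed (enable while a strong reference is held, then try to upgrade later), and a real free-threaded user also needs a safe way to read the unowned pointer, e.g. under a lock or QSBR:

```c
#include <Python.h>

static PyObject *cached;                /* unowned pointer, may go stale */

static void
cache_store(PyObject *obj)              /* caller holds a strong reference */
{
    PyUnstable_EnableTryIncRef(obj);    /* mark obj as supporting TryIncref */
    cached = obj;
}

static PyObject *
cache_fetch(void)
{
    PyObject *obj = cached;
    if (obj != NULL && PyUnstable_TryIncref(obj)) {
        return obj;                     /* strong reference on success */
    }
    return NULL;                        /* object is already being destroyed */
}
```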
Co-authored-by: Petr Viktorin <encukou@gmail.com>
This reduces the size of _PyInterpreterFrame by 8 bytes on 64-bit
platforms using the free threading build due to alignment requirements.
This allows for slightly more recursive calls into the interpreter (from
C), but `test_call.test_super_deep` still crashes.
* Remove all 'if (0)' and 'if (1)' conditional stack effects
* Use array instead of conditional for BUILD_SLICE args
* Refactor LOAD_GLOBAL to use a common conditional uop
* Remove conditional stack effects from LOAD_ATTR specializations
* Replace conditional stack effects in LOAD_ATTR with a 0 or 1 sized array.
* Remove conditional stack effects from CALL_FUNCTION_EX
Clang versions prior to 10 (which corresponds to Apple Clang 12) do not
support the GCC extension syntax __attribute__((fallthrough)), but do
evaluate __has_attribute(fallthrough) to 1 because they support the
C++11 style syntax [[fallthrough]]. The only way to tell if the GCC
style syntax is supported is thus to check the clang version.
Ref: 1e0affb6e5
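A sketch of the shape of the resulting check (the macro name mirrors pyport.h, but treat the details as illustrative):

```c
#if defined(__has_attribute)
#  if __has_attribute(fallthrough) \
        && (!defined(__clang__) || __clang_major__ >= 10)
#    define _Py_FALLTHROUGH __attribute__((fallthrough))
#  endif
#endif
#ifndef _Py_FALLTHROUGH
#  define _Py_FALLTHROUGH do { } while (0)   /* no-op fallback */
#endif
```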
Remove _PyInterpreterState_GetConfigCopy() and
_PyInterpreterState_SetConfig() private functions. PEP 741 "Python
Configuration C API" added a better public C API: PyConfig_Get() and
PyConfig_Set().
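A minimal sketch of the public replacement; `PyConfig_Get()` is assumed to return a new reference (or NULL with an exception set):

```c
#include <Python.h>

static int
print_argv(void)
{
    PyObject *argv = PyConfig_Get("argv");   /* current "argv" configuration value */
    if (argv == NULL) {
        return -1;
    }
    int res = PyObject_Print(argv, stdout, 0);
    Py_DECREF(argv);
    return res;
}
```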
In the free threading build, the per thread reference counting uses a
unique id for some objects to index into the local reference count
table. Use 0 instead of -1 to indicate that the id is not assigned. This
avoids bugs where zero-initialized heap type objects look like they have
a unique id assigned.
* Use TABLES_LOCK() to protect 'tracemalloc_config.tracing'.
* Hold TABLES_LOCK() longer while accessing tables.
* tracemalloc_realloc() and tracemalloc_free() no longer
remove the trace on reentrant call.
* _PyTraceMalloc_Stop() unregisters _PyTraceMalloc_TraceRef().
* _PyTraceMalloc_GetTraces() sets the reentrant flag.
* tracemalloc_clear_traces_unlocked() sets the reentrant flag.
Replaces the trampoline mechanism in Emscripten with an implementation that uses a
recently added feature of wasm-gc instead of JS type reflection, when that feature is
available.
- Unify `get_unicode` and `get_string` in a single function.
- Allow retrieving the underlying `object` attribute, its
size, and the adjusted 'start' and 'end', all at once.
Add a new `_PyUnicodeError_GetParams` internal function for this.
(In `exceptions.c`, it's somewhat common to not need all the attributes,
but the compiler has the opportunity to inline the function and optimize
unneeded work away. Outside that file, we'll usually need all or
most of them at once.)
- Use a common implementation for the following functions:
- `PyUnicode{Decode,Encode}Error_GetEncoding`
- `PyUnicode{Decode,Encode,Translate}Error_GetObject`
- `PyUnicode{Decode,Encode,Translate}Error_{Get,Set}Reason`
- `PyUnicode{Decode,Encode,Translate}Error_{Get,Set}{Start,End}`
Add a fast path to (single-mutex) critical section locking _iff_ the mutex
is already held by the currently active, top-most critical section of this
thread. This can matter a lot for indirectly recursive critical sections
without intervening critical sections.
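A simplified sketch of the fast-path check with illustrative types (the real code works on `PyMutex` and the per-thread critical-section stack):

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct mutex { int v; } mutex;

typedef struct critical_section {
    struct critical_section *prev;   /* enclosing (suspended) section */
    mutex *mutex;                    /* mutex this section must release on exit */
} critical_section;

/* Returns true when the new section can reuse the lock already held by the
   thread's top-most active critical section, skipping the unlock/relock
   dance entirely. */
static bool
try_reentrant_fast_path(critical_section *new_cs,
                        critical_section *top, mutex *m)
{
    if (top != NULL && top->mutex == m) {
        new_cs->prev = top;
        new_cs->mutex = NULL;        /* nothing to release when it ends */
        return true;
    }
    return false;                    /* fall back to the normal slow path */
}
```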
Methods (functions defined in class scope) are likely to be cleaned
up by the GC anyway.
Add a new code flag, `CO_METHOD`, that is set for functions defined
in a class scope. Use that when deciding to defer functions.
* Add `_PyDictKeys_StringLookupSplit` which does locking on dict keys and
use in place of `_PyDictKeys_StringLookup`.
* Change `_PyObject_TryGetInstanceAttribute` to use that function
in the case of split keys.
* Add `unicodekeys_lookup_split` helper which allows code sharing
between `_Py_dict_lookup` and `_PyDictKeys_StringLookupSplit`.
* Fix locking for `STORE_ATTR_INSTANCE_VALUE`. Create
`_GUARD_TYPE_VERSION_AND_LOCK` uop so that object stays locked and
`tp_version_tag` cannot change.
* Pass `tp_version_tag` to `specialize_dict_access()`, ensuring
the version we store on the cache is the correct one (in case it
changes during the specialization analysis).
* Split `analyze_descriptor` into `analyze_descriptor_load` and
`analyze_descriptor_store` since those don't share much logic.
Add `descriptor_is_class` helper function.
* In `specialize_dict_access`, double-check `_PyObject_GetManagedDict()`
in case we raced and the dict was materialized before taking the lock.
* Avoid borrowed references in `_Py_Specialize_StoreAttr()`.
* Use `specialize()` and `unspecialize()` helpers.
* Add unit tests to ensure specializing happens as expected in FT builds.
* Add unit tests to attempt to trigger data races (useful for running under TSAN).
* Add `has_split_table` function to `_testinternalcapi`.
The `PyWeakref_IsDead()` function tests if a weak reference is dead
without any side effects. Although you can also detect if a weak
reference is dead using `PyWeakref_GetRef()`, that function returns a
strong reference that must be `Py_DECREF()`'d, which can introduce side
effects if the last reference is concurrently dropped (at least in the
free threading build).
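Usage sketch; the return convention is assumed to be 1 when the referent is gone, 0 when it is still alive, and -1 on error:

```c
#include <Python.h>

static int
entry_is_stale(PyObject *weakref)
{
    int dead = PyWeakref_IsDead(weakref);
    if (dead < 0) {
        return -1;      /* not a weak reference; an exception is set */
    }
    /* Unlike PyWeakref_GetRef(), no strong reference is created, so this
       check can never end up dropping the last reference itself. */
    return dead;
}
```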
- Add a helper to set an error from locale-encoded `char*`
- Use the helper for gdbm & dlerror messages
Co-authored-by: Victor Stinner <vstinner@python.org>
We use the same approach that was used for specialization of LOAD_GLOBAL in free-threaded builds:
_CHECK_ATTR_MODULE is renamed to _CHECK_ATTR_MODULE_PUSH_KEYS; it pushes the keys object for the following _LOAD_ATTR_MODULE_FROM_KEYS (nee _LOAD_ATTR_MODULE). This arrangement avoids having to recheck the keys version.
_LOAD_ATTR_MODULE is renamed to _LOAD_ATTR_MODULE_FROM_KEYS; it loads the value from the keys object pushed by the preceding _CHECK_ATTR_MODULE_PUSH_KEYS at the cached index.