Add _PyUnicode_FiniTypes() function, called by
finalize_interp_types(). It clears these static types:
* EncodingMapType
* PyFieldNameIter_Type
* PyFormatterIter_Type
_PyStaticType_Dealloc() now does nothing if tp_subclasses
is not NULL.
Add 'static_exceptions' list to factorize code between
_PyExc_InitTypes() and _PyBuiltins_AddExceptions().
_PyExc_InitTypes() does nothing if it's not the main interpreter.
Sort exceptions in Lib/test/exception_hierarchy.txt.
* Move PyContext static types into object.c static_types list.
* Rename PyContextTokenMissing_Type to _PyContextTokenMissing_Type
and declare it in pycore_context.h.
* _PyHamtItems types are no longer exported: replace PyAPI_DATA() with
extern.
Add a new _PyType_GetSubclasses() function to get type's subclasses.
_PyType_GetSubclasses(type) returns a list which holds strong
references to subclasses. It is safer than iterating over
type->tp_subclasses which yields weak references and can be modified
in the loop.
_PyType_GetSubclasses(type) now holds a reference to the tp_subclasses
dict while creating the list of subclasses.
set_collection_flag_recursive() of _abc.c now uses
_PyType_GetSubclasses().
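A minimal usage sketch (internal C API, so Py_BUILD_CORE must be
defined; error handling abbreviated, header providing the declaration
assumed):

    PyObject *subclasses = _PyType_GetSubclasses(type);  /* new list, strong refs */
    if (subclasses == NULL) {
        return -1;
    }
    for (Py_ssize_t i = 0; i < PyList_GET_SIZE(subclasses); i++) {
        PyTypeObject *subclass = (PyTypeObject *)PyList_GET_ITEM(subclasses, i);
        /* subclass stays alive here: the list owns a strong reference */
    }
    Py_DECREF(subclasses);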
Add _PyTypes_FiniTypes() best-effort function to clear static types:
don't deallocate a type if it still has subclasses.
remove_subclass() now sets tp_subclasses to NULL when removing the
last subclass.
The _curses module now creates its ncurses_version type as a heap
type using PyStructSequence_NewType(), rather than using a static
type.
* Move _PyStructSequence_FiniType() definition to pycore_structseq.h.
* test.pythoninfo: log curses.ncurses_version.
Add _PyStructSequence_FiniType() and _PyStaticType_Dealloc()
functions to finalize a structseq static type in Py_Finalize().
Currently, these functions do nothing if Python is built in release
mode.
Clear static types:
* AsyncGenHooksType: sys.set_asyncgen_hooks()
* FlagsType: sys.flags
* FloatInfoType: sys.float_info
* Hash_InfoType: sys.hash_info
* Int_InfoType: sys.int_info
* ThreadInfoType: sys.thread_info
* UnraisableHookArgsType: sys.unraisablehook
* VersionInfoType: sys.version
* WindowsVersionType: sys.getwindowsversion()
* Add RETURN_GENERATOR and JUMP_NO_INTERRUPT opcodes.
* Trim frame and generator by word each.
* Minor refactor of frame.c
* Update test.test_sys to account for smaller frames.
* Treat generator functions as normal functions when evaluating and specializing.
Previously, the main interpreter was allocated on the heap during runtime initialization. Here we instead embed it into _PyRuntimeState, which means it is statically allocated as part of the _PyRuntime global. The same goes for the initial thread state (of each interpreter, including the main one). Consequently there are fewer allocations during runtime/interpreter init, fewer possible failures, and better memory locality.
FYI, this also helps efforts to consolidate globals, which in turn helps work on subinterpreter isolation.
https://bugs.python.org/issue45953
The empty bytes object (b'') and the 256 one-character bytes objects were allocated at runtime init. Now we statically allocate and initialize them.
https://bugs.python.org/issue45953
Move almost all private functions of Include/cpython/fileutils.h to
the internal C API Include/internal/pycore_fileutils.h.
Only keep _Py_fopen_obj() in Include/cpython/fileutils.h, since it's
used by _testcapi which must not use the internal C API.
Move EncodeLocaleEx() and DecodeLocaleEx() functions from _testcapi
to _testinternalcapi, since the C API moved to the internal C API.
This reverts commit ea251806b8.
Keep "assert(interned == NULL);" in _PyUnicode_Fini(), but only for
the main interpreter.
Keep _PyUnicode_ClearInterned() changes avoiding the creation of a
temporary Python list object.
* Do not PUSH/POP traceback or type to the stack as part of exc_info
* Remove exc_traceback and exc_type from _PyErr_StackItem
* Add a What's New entry, because this change breaks things like Cython
The array of small PyLong objects has been statically declared. Here I also statically initialize them. Consequently they are no longer initialized dynamically during runtime init.
I've also moved them under a new sub-struct in _PyRuntimeState, in preparation for static allocation and initialization of other global objects.
https://bugs.python.org/issue45953
This change consists strictly of renames and moving code around. It helps in the following ways:
* ensures type-related init functions focus strictly on one of the three aspects (state, objects, types)
* passes in PyInterpreterState * to all those functions, simplifying work on moving types/objects/state to the interpreter
* consistent naming conventions help make what's going on more clear
* keeping API related to a type in the corresponding header file makes it more obvious where to look for it
https://bugs.python.org/issue46008
Previously, basic initialization of PyInterpreterState happened in PyInterpreterState_New() (along with allocation and adding the new interpreter to the runtime state). This prevented us from initializing interpreter states that were allocated separately (e.g. statically or in a free list). We've addressed that here by factoring out a separate function just for initialization. We've done the same for PyThreadState. _PyRuntimeState was sorted out when we added it since _PyRuntime is statically allocated. However, here we update the existing init code to line up with the functions for PyInterpreterState and PyThreadState.
https://bugs.python.org/issue46008
PyInterpreterState_Main() is a plain function exposed in the public C-API. For internal usage we can take the more efficient approach in this PR.
https://bugs.python.org/issue46008
This simplifies new_threadstate(). We also rename _PyThreadState_Init() to _PyThreadState_SetCurrent() to reflect what it actually does.
https://bugs.python.org/issue46008
This parallels _PyRuntimeState.interpreters. Doing this helps make it more clear what part of PyInterpreterState relates to its threads.
https://bugs.python.org/issue46008
This falls into the category of keep-allocation-and-initialization separate. It also allows us to use _PyEval_InitState() safely in functions that return void.
https://bugs.python.org/issue46008
* Make generator, coroutine and async gen structs all the same size.
* Store interpreter frame in generator (and coroutine). Reduces the number of allocations needed for a generator from two to one.
The getpath.py file is frozen at build time and executed as code over a namespace. It is never imported, nor is it meant to be importable or reusable. However, it should be easier to read, modify, and patch than the previous code.
This commit attempts to preserve every previously tested quirk, but these may be changed in the future to better align platforms.
* Split exit paths into exceptional and non-exceptional.
* Move exit tracing code to individual bytecodes.
* Wrap all trace entry and exit events in macros to make them clearer and easier to enhance.
* Move return sequence into RETURN_VALUE, YIELD_VALUE and YIELD_FROM. Distinguish between normal trace events and dtrace events.
The following internal macros can no longer be used as l-values:
* asdl_seq_GET()
* asdl_seq_GET_UNTYPED()
* asdl_seq_LEN()
They are modified to use the _Py_RVALUE() macro.
Add a new _Py_RVALUE() macro to prevent using an expression as an
l-value.
Replace a "(void)" cast with the _Py_RVALUE() macro in the following
macros:
* PyCell_SET()
* PyList_SET_ITEM()
* PyTuple_SET_ITEM()
* _PyGCHead_SET_FINALIZED()
* _PyGCHead_SET_NEXT()
* asdl_seq_SET()
* asdl_seq_SET_UNTYPED()
Also add parentheses around macro arguments in the PyCell_SET() and
PyTuple_SET_ITEM() macros.
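A rough sketch of the idea (the actual CPython definitions may
differ): wrapping the expression with the comma operator turns it into
an rvalue, so assigning to the macro result no longer compiles.

    /* sketch: the comma operator yields an rvalue */
    #define _Py_RVALUE(EXPR) ((void)0, (EXPR))

    /* e.g. with this wrapper, "PyTuple_SET_ITEM(tuple, 0, item) = x;"
       becomes a compile-time error */
    #define PyTuple_SET_ITEM(op, index, value) \
        _Py_RVALUE(((PyTupleObject *)(op))->ob_item[(index)] = (value))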
* Make internal APIs that take PyFrameConstructor take a PyFunctionObject instead.
* Add reference to function to frame, borrow references to builtins and globals.
* Add COPY_FREE_VARS instruction to allow specialization of calls to inner functions.
Currently custom modules (the array set on PyImport_FrozenModules) replace all the frozen stdlib modules. That can be problematic and is unlikely to be what the user wants. This change treats the custom frozen modules as additions instead. They take precedence over all other frozen modules except for those needed to bootstrap the import system. If the "code" field of an entry in the custom array is NULL then that frozen module is treated as disabled, which allows a custom entry to disable a frozen stdlib module.
This change allows us to get rid of is_essential_frozen_module() and simplifies the logic for which frozen modules should be ignored.
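For illustration, a hedged sketch of a custom table under this scheme
(the module name "mymod" and the mymod_frozen_code array are
hypothetical, and the classic name/code/size layout of struct _frozen
is assumed):

    static const struct _frozen custom_frozen[] = {
        /* extra frozen module, added on top of the frozen stdlib modules */
        {"mymod", mymod_frozen_code, (int)sizeof(mymod_frozen_code)},
        /* NULL code field: treat this frozen stdlib module as disabled */
        {"abc", NULL, 0},
        {0, 0, 0}  /* sentinel */
    };

    /* installed before Py_Initialize() */
    PyImport_FrozenModules = custom_frozen;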
https://bugs.python.org/issue45395
The recently added PyConfig.stdlib_dir was being set with ".." entries. When __file__ was added for frozen modules this caused a problem on out-of-tree builds. This PR fixes that by normalizing "stdlib_dir" when it is calculated in getpath.c.
https://bugs.python.org/issue45506
Freelists for object structs can now be disabled. A new ``configure``
option ``--without-freelists`` can be used to disable all freelists
except the empty tuple singleton. Internal Py*_MAXFREELIST macros can
now be defined as 0 without causing compiler warnings or segfaults.
Signed-off-by: Christian Heimes <christian@python.org>
Move Include/longobject.h non-limited API to a new
Include/cpython/longobject.h header file.
Move the following definitions to the internal C API:
* _PyLong_DigitValue
* _PyLong_FormatAdvancedWriter()
* _PyLong_FormatWriter()
Split header files to move the non-limited API to Include/cpython/:
* Include/warnings.h => Include/cpython/warnings.h
* Include/weakrefobject.h => Include/cpython/weakrefobject.h
Exclude PyWeakref_GET_OBJECT() from the limited C API. It never
worked since the PyWeakReference structure is opaque in the limited C
API.
Move _PyWarnings_Init() and _PyErr_WarnUnawaitedCoroutine() to the
internal C API.
The default was "off". Switching it to "on" means users get the benefit of frozen stdlib modules without having to do anything. There's a special-case for running-in-source-tree, so contributors don't get surprised when their stdlib changes don't get used.
https://bugs.python.org/issue45020
Remove fallbacks for missing round(), copysign() and hypot() in
Python/pymath.c. Python now requires these functions to build.
These fallbacks were needed on Visual Studio 2012 and older. They are
no longer needed since Visual Studio 2013. Python is now built with
Visual Studio 2017 or newer since Python 3.6.
Convert the result of the following macros, which set variables, to
void to avoid the risk of misusing them:
* _PyGCHead_SET_NEXT()
* asdl_seq_SET()
* asdl_seq_SET_UNTYPED()
Add PyThreadState_EnterTracing() and PyThreadState_LeaveTracing()
functions to the limited C API to suspend and resume tracing and
profiling.
Add a unit test on the PyThreadState C API to _testcapi.
Also add the internal _PyThreadState_DisableTracing() and
_PyThreadState_ResetTracing() functions.
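A minimal usage sketch of the two new functions (assuming a valid,
attached thread state):

    PyThreadState *tstate = PyThreadState_Get();

    PyThreadState_EnterTracing(tstate);
    /* ... run code that must not trigger trace or profile callbacks ... */
    PyThreadState_LeaveTracing(tstate);
    /* tracing and profiling are active again */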
Rename Include/namespaceobject.h to
Include/internal/pycore_namespace.h.
The _testmultiphase extension is now built with the
Py_BUILD_CORE_MODULE macro defined to access _PyNamespace_Type.
object.c: remove unused "pycore_context.h" include.
* Move _PyObject_VectorcallTstate() and _PyObject_FastCallTstate() to
pycore_call.h (internal C API).
* Convert PyObject_CallOneArg(), PyObject_Vectorcall(),
_PyObject_FastCall() and PyVectorcall_Function() static inline
functions to regular functions.
* Add _PyVectorcall_FunctionInline() static inline function.
* PyObject_Vectorcall(), _PyObject_FastCall(), and
PyObject_CallOneArg() now call _PyThreadState_GET() rather
than PyThreadState_Get().
Building Python now requires a C99 <math.h> header file providing
isinf(), isnan() and isfinite() functions.
Remove the Py_FORCE_DOUBLE() macro. It was used by the
Py_IS_INFINITY() macro.
Changes:
* Remove Py_IS_NAN(), Py_IS_INFINITY() and Py_IS_FINITE()
in PC/pyconfig.h.
* Remove the _Py_force_double() function.
* configure no longer checks if math.h defines isinf(), isnan() and
isfinite().
Move Include/pystrhex.h to Include/internal/pycore_strhex.h.
The header file only contains private functions.
The following C extensions are now built with Py_BUILD_CORE_MODULE
macro defined to get access to the internal C API:
* _blake2
* _hashopenssl
* _md5
* _sha1
* _sha3
* _ssl
* binascii
* Never change types' cached keys. It could invalidate inline attribute objects.
* Lazily create object dictionaries.
* Update specialization of LOAD/STORE_ATTR.
* Don't update shared keys version for deletion of value.
* Update gdb support to handle instance values.
* Rename SPLIT_KEYS opcodes to INSTANCE_VALUE.
Redefining the PyThreadState_GET() macro in pycore_pystate.h is
useless since it doesn't affect files not including it. Either use
_PyThreadState_GET() directly, or don't use the pycore_pystate.h
internal C API. For example, the _testcapi extension doesn't use the
internal C API, but uses the public PyThreadState_Get() function
instead.
Replace PyThreadState_Get() with _PyThreadState_GET(). The
_PyThreadState_GET() macro is more efficient than the
PyThreadState_Get() and PyThreadState_GET() function calls, which can
fail with a fatal Python error.
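A short sketch of the replacement (internal C API: Py_BUILD_CORE must
be defined to include pycore_pystate.h):

    #include "pycore_pystate.h"   // _PyThreadState_GET()

    /* Unlike PyThreadState_Get(), the macro does not abort with a fatal
       error when the thread state is missing: it simply returns NULL. */
    PyThreadState *tstate = _PyThreadState_GET();
    assert(tstate != NULL);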
posixmodule.c and _ctypes extension now include <windows.h> before
pycore header files (like pycore_call.h).
_PyTraceback_Add() now uses _PyErr_Fetch()/_PyErr_Restore() instead
of PyErr_Fetch()/PyErr_Restore().
The _decimal and _xxsubinterpreters extensions are now built with the
Py_BUILD_CORE_MODULE macro defined to get access to the internal C
API.
* Move _PyObject_CallNoArgs() to pycore_call.h (internal C API).
* _ssl, _sqlite and _testcapi extensions now call the public
PyObject_CallNoArgs() function, rather than _PyObject_CallNoArgs().
* _lsprof extension is now built with Py_BUILD_CORE_MODULE macro
defined to get access to internal _PyObject_CallNoArgs().
Fix typo in the private _PyObject_CallNoArg() function name: rename
it to _PyObject_CallNoArgs() to be consistent with the public
function PyObject_CallNoArgs().
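For code outside the core, the public function is a drop-in; a minimal
sketch:

    /* callable: any object implementing the call protocol */
    PyObject *result = PyObject_CallNoArgs(callable);
    if (result == NULL) {
        return NULL;  /* exception already set by the call */
    }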
Move definitions of copysign(), round(), hypot(), fmod(), etc. from
pymath.h to pycore_pymath.h. These functions are not exported by
libpython and so must not be part of the C API.
Move the following macros to pycore_pymath.h (internal C API):
* _Py_SET_53BIT_PRECISION_HEADER
* _Py_SET_53BIT_PRECISION_START
* _Py_SET_53BIT_PRECISION_END
PEP 7: add braces to if and "do { ... } while (0)" in these macros.
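For illustration, a hypothetical macro written in that style: braces on
the if, and a do { ... } while (0) wrapper so the macro expands to a
single statement even directly before an else clause.

    #define CLAMP_TO_ZERO(x)    \
        do {                    \
            if ((x) < 0) {      \
                (x) = 0;        \
            }                   \
        } while (0)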
Also move the _Py_get_387controlword() and _Py_set_387controlword()
definitions to pycore_pymath.h. These functions are no longer
exported.
pystrtod.c now includes pycore_pymath.h.
Remove the following math macros using the errno variable:
* Py_ADJUST_ERANGE1()
* Py_ADJUST_ERANGE2()
* Py_OVERFLOWED()
* Py_SET_ERANGE_IF_OVERFLOW()
* Py_SET_ERRNO_ON_MATH_ERROR()
Create pycore_pymath.h internal header file.
Rename Py_ADJUST_ERANGE1() and Py_ADJUST_ERANGE2() to
_Py_ADJUST_ERANGE1() and _Py_ADJUST_ERANGE2(), and convert these
macros to static inline functions.
Move the following macros to pycore_pymath.h:
* _Py_IntegralTypeSigned()
* _Py_IntegralTypeMax()
* _Py_IntegralTypeMin()
* _Py_InIntegralTypeRange()
In the list of generated frozen modules at the top of Tools/scripts/freeze_modules.py, you will find that some of the modules have a different name than the module (or .py file) that is actually frozen. Let's call each case an "alias". Aliases do not come into play until we get to the (generated) list of modules in Python/frozen.c. (The tool for freezing modules, Programs/_freeze_module, is only concerned with the source file, not the module it will be used for.)
Knowledge of which frozen modules are aliases (and the identity of the original module) normally isn't important. However, this information is valuable when we go to set __file__ on frozen stdlib modules. This change updates Tools/scripts/freeze_modules.py to map aliases to the original module name (or None if not a stdlib module) in Python/frozen.c. We also add a helper function in Python/import.c to look up a frozen module's alias and add the result of that function to the frozen info returned from find_frozen().
https://bugs.python.org/issue45020
During runtime startup we figure out the stdlib dir but currently throw that information away. This change preserves it and exposes it via PyConfig.stdlib_dir, _Py_GetStdlibDir(), and sys._stdlib_dir.
https://bugs.python.org/issue45211
This accomplishes 2 things:
* consolidates some common code between getpath.c and getpathp.c
* makes the helpers available to code in other files
FWIW, the signature of the join_relfile() function (in fileutils.c) intentionally mirrors that of Windows' PathCchCombineEx().
Note that this change is mostly moving code around. No behavior is meant to change.
https://bugs.python.org/issue45211
Currently we freeze several modules into the runtime. For each of these modules it is essential to bootstrapping the runtime that they be frozen. Any other stdlib module that we later freeze into the runtime is not essential. We can just as well import from the .py file. This PR lets users explicitly choose which should be used, with the new "-X frozen_modules=[on|off]" CLI flag. The default is "off" for now.
https://bugs.python.org/issue45020
Places the locals between the specials and the stack. This is the more "natural" layout for a C struct, makes the code simpler, and gives a slight speedup (~1%).
* Generalize cache names for LOAD_ATTR to allow store and delete specializations.
* Factor out specialization of attribute dictionary access.
* Specialize STORE_ATTR.
* Convert "specials" array to InterpreterFrame struct, adding f_lasti, f_state and other non-debug FrameObject fields to it.
* Refactor, calls pushing the call to the interpreter upward toward _PyEval_Vector.
* Compute f_back when on thread stack, only filling in value when frame object outlives stack invocation.
* Move ownership of InterpreterFrame in generator from frame object to generator object.
* Do not create frame objects for Python calls.
* Do not create frame objects for generators.
This PR is part of PEP 657 and augments the compiler to emit ending
line numbers as well as starting and ending columns from the AST
into compiled code objects. This allows bytecodes to be correlated
to the exact source code ranges that generated them.
This information is made available through the following public APIs:
* The `co_positions` method on code objects.
* The C API function `PyCode_Addr2Location`.
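A hedged usage sketch of the C API (code and instr_offset are assumed
to be in scope; the nonzero-on-success return convention is assumed):

    int start_line, start_col, end_line, end_col;
    if (PyCode_Addr2Location(code, instr_offset,
                             &start_line, &start_col,
                             &end_line, &end_col)) {
        /* the instruction at instr_offset was generated from source range
           (start_line, start_col) .. (end_line, end_col) */
    }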
Co-authored-by: Batuhan Taskaya <isidentical@gmail.com>
Co-authored-by: Ammar Askar <ammar@ammaraskar.com>
Add an internal _PyType_AllocNoTrack() function to allocate an object
without tracking it in the GC.
Modify dict_new() to use _PyType_AllocNoTrack(): dict subclasses are
now only tracked once all PyDictObject members are initialized.
Calling _PyObject_GC_UNTRACK() is no longer needed for the dict type.
Similar change in tuple_subtype_new() for tuple subclasses.
Replace tuple_gc_track() with _PyObject_GC_TRACK().
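A minimal sketch of the allocate-then-track pattern this enables
(internal C API; error handling abbreviated):

    PyObject *obj = _PyType_AllocNoTrack(type, 0);   /* not GC-tracked yet */
    if (obj == NULL) {
        return NULL;
    }
    /* ... fully initialize every member of obj here ... */
    _PyObject_GC_TRACK(obj);   /* track only once initialization is complete */
    return obj;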
* Specialize obj.__class__ with LOAD_ATTR_SLOT
* Specialize instance attribute lookup with attribute on class, provided attribute on class is not an overriding descriptor.
* Add stat for how many times the unquickened instruction has executed.
Currently, if an arg value escapes (into the closure for an inner function) we end up allocating two indices in the fast locals even though only one gets used. Additionally, using the lower index would be better in some cases, such as with no-arg `super()`. To address this, we update the compiler to fix the offsets so each variable only gets one "fast local". As a consequence, now some cell offsets are interspersed with the locals (only when an arg escapes to an inner function).
https://bugs.python.org/issue43693
* Specialize LOAD_ATTR with LOAD_ATTR_SLOT and LOAD_ATTR_SPLIT_KEYS
* Move dict-common.h to internal/pycore_dict.h
* Add LOAD_ATTR_WITH_HINT specialized opcode.
* Quicken in function if loopy
* Specialize LOAD_ATTR for module attributes.
* Add specialization stats
This was reverted in GH-26596 (commit 6d518bb) due to some bad memory accesses.
* Add the MAKE_CELL opcode. (gh-26396)
The memory accesses have been fixed.
https://bugs.python.org/issue43693
The plan is to eventually make PyCodeObject opaque in the public C-API, with the full struct moved to Include/internal/pycore_code.h. _PyLocalsPlusKinds and _PyLocalsPlusKind started off there but were needed on PyCodeObject, hence the duplication. This led to warnings with some compilers. (Apparently it does not trigger a warning on my install of GCC.)
This change eliminates the superfluous typedef.
https://bugs.python.org/issue43693
This moves logic out of the frame initialization code and into the compiler and eval loop. Doing so simplifies the runtime code and allows us to optimize it better.
https://bugs.python.org/issue43693
These were reverted in gh-26530 (commit 17c4edc) due to refleaks.
* 2c1e258 - Compute deref offsets in compiler (gh-25152)
* b2bf2bc - Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)
This change fixes the refleaks.
https://bugs.python.org/issue43693
* Add co_firstinstr field to code object.
* Implement barebones quickening.
* Use non-quickened bytecode when tracing.
* Add NEWS item
* Add new file to Windows build.
* Don't specialize instructions with EXTENDED_ARG.
* Revert "bpo-43693: Compute deref offsets in compiler (gh-25152)"
This reverts commit b2bf2bc1ec.
* Revert "bpo-43693: Add new internal code objects fields: co_fastlocalnames and co_fastlocalkinds. (gh-26388)"
This reverts commit 2c1e2583fd.
These two commits are breaking the refleak buildbots.
A number of places in the code base (notably ceval.c and frameobject.c) rely on mapping variable names to indices in the frame "locals plus" array (AKA fast locals), and thus opargs. Currently the compiler indirectly encodes that information on the code object as the tuples co_varnames, co_cellvars, and co_freevars. At runtime the dependent code must calculate the proper mapping from those, which isn't ideal and impacts performance-sensitive sections. This is something we can easily address in the compiler instead.
This change addresses the situation by replacing internal use of co_varnames, etc. with a single combined tuple of names in locals-plus order, along with a minimal array mapping each to its kind (local vs. cell vs. free). These two new PyCodeObject fields, co_fastlocalnames and co_fastlocalkinds, are not exposed to Python code for now, but co_varnames, etc. are still available with the same values as before (though computed lazily).
Aside from the (mild) performance impact, there are a number of other benefits:
* there's now a clear, direct relationship between locals-plus and variables
* code that relies on the locals-plus-to-name mapping is simpler
* marshaled code objects are smaller and serialize/de-serialize faster
Also note that we can take this approach further by expanding the possible values in co_fastlocalkinds to include specific argument types (e.g. positional-only, kwargs). Doing so would allow further speed-ups in _PyEval_MakeFrameVector(), which is where args get unpacked into the locals-plus array. It would also allow us to shrink marshaled code objects even further.
https://bugs.python.org/issue43693
Fix a crash at Python exit when a deallocator function removes the
last strong reference to a heap type.
Don't read type memory after calling basedealloc() since
basedealloc() can deallocate the type and free its memory.
_PyMem_IsPtrFreed() argument is now constant.
* Remove 'zombie' frames. We won't need them once we are allocating fixed-size frames.
* Add co_nlocalplus field to code object to avoid recomputing size of locals + frees + cells.
* Move locals, cells and freevars out of frame object into separate memory buffer.
* Use per-threadstate allocated memory chunks for local variables.
* Move globals and builtins from frame object to per-thread stack.
* Move (slow) locals frame object to per-thread stack.
* Move internal frame functions to internal header.
* Add Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING, add to all relevant standard builtin classes.
* Set relevant flags on collections.abc.Sequence and Mapping.
* Use flags in MATCH_SEQUENCE and MATCH_MAPPING opcodes (see the sketch after this list).
* Inherit Py_TPFLAGS_SEQUENCE and Py_TPFLAGS_MAPPING.
* Add NEWS
* Remove interpreter-state map_abc and seq_abc fields.
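A small sketch of how such a flag check looks in C (subject is assumed
to be in scope; this mirrors the idea behind MATCH_SEQUENCE and
MATCH_MAPPING rather than their literal implementation):

    if (PyType_HasFeature(Py_TYPE(subject), Py_TPFLAGS_SEQUENCE)) {
        /* candidate for a sequence pattern */
    }
    else if (PyType_HasFeature(Py_TYPE(subject), Py_TPFLAGS_MAPPING)) {
        /* candidate for a mapping pattern */
    }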
Faster bz2/lzma/zlib via new output buffering.
Also adds a .readall() function to the _compression.DecompressReader class
to take best advantage of this in the consume-all-output at once scenario.
Often a 5-20% speedup in common scenarios due to less data copying.
Contributed by Ma Lin.
To improve the user experience understanding what part of the error messages associated with SyntaxErrors is wrong, we can highlight the whole error range and not only place the caret at the first character. In this way:
>>> foo(x, z for z in range(10), t, w)
File "<stdin>", line 1
foo(x, z for z in range(10), t, w)
^
SyntaxError: Generator expression must be parenthesized
becomes
>>> foo(x, z for z in range(10), t, w)
File "<stdin>", line 1
foo(x, z for z in range(10), t, w)
^^^^^^^^^^^^^^^^^^^^
SyntaxError: Generator expression must be parenthesized