Move the following macros to pycore_pymath.h (internal C API):
* _Py_SET_53BIT_PRECISION_HEADER
* _Py_SET_53BIT_PRECISION_START
* _Py_SET_53BIT_PRECISION_END
PEP 7: add braces to if statements and use "do { ... } while (0)" in
these macros (the idiom is sketched below).
Also move the _Py_get_387controlword() and _Py_set_387controlword()
definitions to pycore_pymath.h. These functions are no longer
exported.
pystrtod.c now includes pycore_pymath.h.
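To make the PEP 7 point concrete, here is a minimal sketch of the "do { ... } while (0)" idiom with a braced if block. The names (EXAMPLE_SET_PRECISION, example_set_control_word()) are made up for illustration; this is not the actual body of the _Py_SET_53BIT_PRECISION_* macros.

```
/* Wrapping a multi-statement macro body in "do { ... } while (0)"
   makes the expansion behave like a single statement, so it stays
   correct inside an unbraced "if"/"else". */
#define EXAMPLE_SET_PRECISION(new_cw, old_cw)       \
    do {                                            \
        if ((old_cw) != (new_cw)) {                 \
            example_set_control_word(new_cw);       \
        }                                           \
    } while (0)
```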
Remove the following math macros, which use the errno variable:
* Py_ADJUST_ERANGE1()
* Py_ADJUST_ERANGE2()
* Py_OVERFLOWED()
* Py_SET_ERANGE_IF_OVERFLOW()
* Py_SET_ERRNO_ON_MATH_ERROR()
Create pycore_pymath.h internal header file.
Rename Py_ADJUST_ERANGE1() and Py_ADJUST_ERANGE2() to
_Py_ADJUST_ERANGE1() and _Py_ADJUST_ERANGE2(), and convert these
macros to static inline functions.
Move the following macros to pycore_pymath.h:
* _Py_IntegralTypeSigned()
* _Py_IntegralTypeMax()
* _Py_IntegralTypeMin()
* _Py_InIntegralTypeRange()
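As a rough illustration of the macro-to-static-inline conversion described above, the pattern looks like the following. This is only a sketch (the function is deliberately named _example_adjust_erange1() here); the real _Py_ADJUST_ERANGE1() in pycore_pymath.h may differ in detail.

```
#include <errno.h>
#include <math.h>

/* Sketch: an errno-adjusting macro rewritten as a static inline
   function. Unlike a macro, the argument is evaluated exactly once
   and the call is type checked by the compiler. */
static inline void
_example_adjust_erange1(double x)
{
    if (errno == 0) {
        if (x == HUGE_VAL || x == -HUGE_VAL) {
            errno = ERANGE;
        }
    }
    else if (errno == ERANGE && x == 0.0) {
        errno = 0;
    }
}
```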
In the list of generated frozen modules at the top of Tools/scripts/freeze_modules.py, you will find that some of the modules have a different name than the module (or .py file) that is actually frozen. Let's call each case an "alias". Aliases do not come into play until we get to the (generated) list of modules in Python/frozen.c. (The tool for freezing modules, Programs/_freeze_module, is only concerned with the source file, not the module it will be used for.)
Knowledge of which frozen modules are aliases (and the identity of the original module) normally isn't important. However, this information is valuable when we go to set __file__ on frozen stdlib modules. This change updates Tools/scripts/freeze_modules.py to map aliases to the original module name (or None if not a stdlib module) in Python/frozen.c. We also add a helper function in Python/import.c to look up a frozen module's alias and add the result of that function to the frozen info returned from find_frozen().
https://bugs.python.org/issue45020
Before this change we end up duplicating effort and throwing away data in FrozenImporter.find_spec(). Now we do the work once in find_spec() and the only thing we do in FrozenImporter.exec_module() is turn the raw frozen data into a code object and then exec it.
We've added _imp.find_frozen(), added an arg to _imp.get_frozen_object(), and updated FrozenImporter. We've also moved some code around to reduce duplication, get a little more consistency in outcomes, and be more efficient.
Note that this change is mostly necessary if we want to set __file__ on frozen stdlib modules. (See https://bugs.python.org/issue21736.)
https://bugs.python.org/issue45324
Add a private C API for deadlines: add _PyDeadline_Init() and
_PyDeadline_Get() functions.
* Add _PyTime_Add() and _PyTime_Mul() functions which compute t1+t2
and t1*t2 and clamp the result on overflow (see the sketch after this
list).
* _PyTime_MulDiv() now uses _PyTime_Add() and _PyTime_Mul().
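A minimal sketch of the clamping ("saturating") arithmetic described above, using stand-in names (example_time_t, example_time_add()) rather than the private _PyTime_t API:

```
#include <stdint.h>

typedef int64_t example_time_t;          /* stand-in for _PyTime_t */
#define EXAMPLE_TIME_MAX INT64_MAX
#define EXAMPLE_TIME_MIN INT64_MIN

/* Add two timestamps, clamping to the representable range instead of
   letting signed overflow happen (which is undefined behavior). */
static example_time_t
example_time_add(example_time_t t1, example_time_t t2)
{
    if (t2 > 0 && t1 > EXAMPLE_TIME_MAX - t2) {
        return EXAMPLE_TIME_MAX;
    }
    if (t2 < 0 && t1 < EXAMPLE_TIME_MIN - t2) {
        return EXAMPLE_TIME_MIN;
    }
    return t1 + t2;
}
```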
WaitForSingleObject() accepts a timeout in milliseconds in the range
[0; 0xFFFFFFFE] (DWORD type). The INFINITE value (0xFFFFFFFF) means no
timeout. 0xFFFFFFFE milliseconds is around 49.7 days.
PY_TIMEOUT_MAX is (0xFFFFFFFE * 1000) microseconds on Windows, around
49.7 days.
Partially revert commit 37b8294d62.
On Unix, if the sem_clockwait() function is available in the C
library (glibc 2.30 and newer), the threading.Lock.acquire() method
now uses the monotonic clock (time.CLOCK_MONOTONIC) for the timeout,
rather than using the system clock (time.CLOCK_REALTIME), to not be
affected by system clock changes.
configure now checks if the sem_clockwait() function is available.
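For illustration, a sketch of waiting on a semaphore against a monotonic deadline with sem_clockwait(); the wrapper name and the timeout handling are simplified, and glibc 2.30 or newer is assumed:

```
#define _GNU_SOURCE
#include <semaphore.h>
#include <time.h>
#include <errno.h>

/* Compute an absolute deadline on CLOCK_MONOTONIC, then wait on the
   semaphore against that deadline so a system clock change cannot
   shorten or extend the wait. */
static int
example_sem_wait_monotonic(sem_t *sem, long timeout_ns)
{
    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);
    deadline.tv_sec  += timeout_ns / 1000000000L;
    deadline.tv_nsec += timeout_ns % 1000000000L;
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec += 1;
        deadline.tv_nsec -= 1000000000L;
    }

    int res;
    do {
        res = sem_clockwait(sem, CLOCK_MONOTONIC, &deadline);
    } while (res < 0 && errno == EINTR);   /* retry if interrupted */
    return res;   /* 0 on success; -1 with errno == ETIMEDOUT on timeout */
}
```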
I've added a number of test-only modules. Some of those cases are covered by the recently frozen stdlib modules (and some will be once we add encodings back in). However, I figured we'd play it safe by having a set of modules guaranteed to be there during tests.
https://bugs.python.org/issue45020
PyThread_acquire_lock_timed() now clamps the timeout into the
[_PyTime_MIN; _PyTime_MAX] range (_PyTime_t type) if it is too large,
rather than calling Py_FatalError() which aborts the process.
PyThread_acquire_lock_timed() no longer uses
MICROSECONDS_TO_TIMESPEC() to compute the sem_timedwait() argument; it
now uses _PyTime_GetSystemClock() and _PyTime_AsTimespec_truncate().
Fix _thread.TIMEOUT_MAX value on Windows: the maximum timeout is
0x7FFFFFFF milliseconds (around 24.9 days), not 0xFFFFFFFF
milliseconds (around 49.7 days).
Set PY_TIMEOUT_MAX to 0x7FFFFFFF milliseconds, rather than 0xFFFFFFFF
milliseconds.
Fix PY_TIMEOUT_MAX overflow test: replace (us >= PY_TIMEOUT_MAX) with
(us > PY_TIMEOUT_MAX).
Add pytime_add() and pytime_mul() functions to pytime.c to compute
t+t2 and t*k with clamping to [_PyTime_MIN; _PyTime_MAX].
Fix pytime.h: _PyTime_FromTimeval() is not implemented on Windows.
Add the _PyTime_AsTimespec_clamp() function: similar to
_PyTime_AsTimespec(), but clamp to _PyTime_t min/max and don't raise
an exception.
PyThread_acquire_lock_timed() now uses _PyTime_AsTimespec_clamp() to
remove the Py_UNREACHABLE() code path.
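A rough sketch of the "clamp instead of raise" conversion that _PyTime_AsTimespec_clamp() performs; the helper name and the clamping bounds here are illustrative, not the exact CPython implementation:

```
#include <stdint.h>
#include <time.h>

/* Convert a nanosecond count to struct timespec. On overflow of the
   seconds field, clamp to the representable range instead of failing. */
static void
example_ns_to_timespec_clamp(int64_t ns, struct timespec *ts)
{
    const int64_t SEC_TO_NS = 1000000000;
    int64_t sec = ns / SEC_TO_NS;
    int64_t nsec = ns % SEC_TO_NS;
    if (nsec < 0) {                 /* keep tv_nsec in [0; 999999999] */
        nsec += SEC_TO_NS;
        sec -= 1;
    }
    if (sizeof(time_t) == sizeof(int32_t)) {   /* 32-bit time_t: clamp */
        if (sec > INT32_MAX) {
            sec = INT32_MAX;
            nsec = SEC_TO_NS - 1;
        }
        else if (sec < INT32_MIN) {
            sec = INT32_MIN;
            nsec = 0;
        }
    }
    ts->tv_sec = (time_t)sec;
    ts->tv_nsec = (long)nsec;
}
```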
* Add _PyTime_AsTime_t() function.
* Add PY_TIME_T_MIN and PY_TIME_T_MAX constants.
* Replace _PyTime_AsTimeval_noraise() with _PyTime_AsTimeval_clamp().
* Add pytime_divide_round_up() function (see the sketch after this
list).
* Fix integer overflow in pytime_divide().
* Add pytime_divmod() function.
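For the rounding helper mentioned in this list, here is a sketch of an overflow-safe "divide and round up": the naive (a + b - 1) / b can overflow when a is near the type's maximum, so the expression is rearranged instead. The function name is illustrative, not the real pytime.c helper.

```
#include <stdint.h>

/* Divide a by b, rounding up, assuming a >= 0 and b > 0.
   Written so that no intermediate value can overflow. */
static int64_t
example_divide_round_up(int64_t a, int64_t b)
{
    if (a == 0) {
        return 0;
    }
    return 1 + (a - 1) / b;    /* equals ceil(a / b) for a > 0, b > 0 */
}
```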
During runtime startup we figure out the stdlib dir but currently throw that information away. This change preserves it and exposes it via PyConfig.stdlib_dir, _Py_GetStdlibDir(), and sys._stdlib_dir.
https://bugs.python.org/issue45211
This accomplishes 2 things:
* consolidates some common code between getpath.c and getpathp.c
* makes the helpers available to code in other files
FWIW, the signature of the join_relfile() function (in fileutils.c) intentionally mirrors that of Windows' PathCchCombineEx().
Note that this change is mostly moving code around. No behavior is meant to change.
https://bugs.python.org/issue45211
Mark the following thread_nt.h functions as static:
* AllocNonRecursiveMutex()
* FreeNonRecursiveMutex()
* EnterNonRecursiveMutex()
* LeaveNonRecursiveMutex()
py_win_perf_counter_frequency() no longer checks for
QueryPerformanceFrequency() failure. According to the
QueryPerformanceFrequency() documentation, the function cannot fail on
Windows XP and later.
On Windows, time.sleep() now uses a waitable timer which has a
resolution of 100 ns (10^-7 sec). Previously, it had a resolution of
1 ms (10^-3 sec).
* On Windows, time.sleep() now calls PyErr_CheckSignals() before
resetting the SIGINT event.
* Add _PyTime_As100Nanoseconds() function.
* Complete and update time.sleep() documentation.
Co-authored-by: Livius <egyszeregy@freemail.hu>
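A simplified sketch of the waitable-timer approach described above; it assumes a Windows SDK that defines CREATE_WAITABLE_TIMER_HIGH_RESOLUTION, and it omits the SIGINT event handling and error reporting that the real implementation deals with:

```
#include <windows.h>

/* Sleep for the given number of 100 ns units using a high-resolution
   waitable timer instead of the millisecond-granularity Sleep(). */
static int
example_sleep_100ns(LONGLONG hundreds_of_ns)
{
    HANDLE timer = CreateWaitableTimerExW(
        NULL, NULL,
        CREATE_WAITABLE_TIMER_HIGH_RESOLUTION,
        TIMER_ALL_ACCESS);
    if (timer == NULL) {
        return -1;
    }

    LARGE_INTEGER due;
    due.QuadPart = -hundreds_of_ns;   /* negative = relative due time */
    if (!SetWaitableTimer(timer, &due, 0, NULL, NULL, FALSE)) {
        CloseHandle(timer);
        return -1;
    }

    DWORD rc = WaitForSingleObject(timer, INFINITE);
    CloseHandle(timer);
    return (rc == WAIT_OBJECT_0) ? 0 : -1;
}
```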
The main advantage is that the files will no longer show up in diffs and PRs. That means, for a PR, the number of files / lines changed will more clearly reflect the actual change. (This is essentially an un-revert of gh-28375.)
https://bugs.python.org/issue45020
The main advantage is that the files will no longer show up in diffs and PRs. That means, for a PR, the number of files / lines changed will more clearly reflect the actual change.
https://bugs.python.org/issue45020
Here's one more small cleanup that should have been in PR gh-28319. We eliminate stdout side-effects from importing the frozen __hello__ module, and update tests accordingly. We also move the module's source file into Lib/ from Tools/freeze/flag.py.
https://bugs.python.org/issue45019
Doing this provides significant performance gains for runtime startup (~15% with all the imported modules frozen). We don't yet freeze all the imported modules because there are a few hiccups in the build systems we need to sort out first. (See bpo-45186 and bpo-45188.)
Note that in PR GH-28320 we added a command-line flag (-X frozen_modules=[on|off]) that allows users to opt out of (or into) using frozen modules. The default is still "off" but we will change it to "on" as soon as we can do it in a way that does not cause contributors pain.
https://bugs.python.org/issue45020
Refactor pytime.c:
* Add pytime_from_nanoseconds() and pytime_as_nanoseconds(), and use
these functions explicitly
* Add two empty lines between functions
* PEP 7: add braces { ... }
* C99: declare variables where they are set
* Rename private functions to lowercase
* Rename error_time_t_overflow() to pytime_time_t_overflow()
* Rename win_perf_counter_frequency() to py_win_perf_counter_frequency()
* py_get_monotonic_clock(): add an assertion to detect overflow when
the unsigned uint64_t value of mach_absolute_time() is cast to
_PyTime_t (signed int64_t).
_testcapi: use _PyTime_FromNanoseconds().
Currently we freeze several modules into the runtime. Each of these modules is essential to bootstrapping the runtime, so it must be frozen. Any other stdlib module that we later freeze into the runtime is not essential; we could just as well import it from the .py file. This PR lets users explicitly choose which should be used, with the new "-X frozen_modules=[on|off]" CLI flag. The default is "off" for now.
https://bugs.python.org/issue45020
There are a few things I missed in gh-27980. This is a follow-up that will make subsequent PRs cleaner. It includes fixes to tests and tools that reference the frozen modules.
https://bugs.python.org/issue45019
* Constructors of subclasses of some builtin classes (e.g. tuple, list,
frozenset) no longer accept arbitrary keyword arguments.
* A subclass of set can now define a __new__() method with additional
keyword parameters without also overriding __init__().
Release the GIL while performing isatty() system calls on arbitrary
file descriptors. In particular, this affects os.isatty(),
os.device_encoding() and io.TextIOWrapper. By extension,
io.open() in text mode is also affected.
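A minimal sketch of the pattern, using the public Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS macros (the wrapper function name is made up):

```
#include "Python.h"
#include <unistd.h>

/* Release the GIL around the isatty() system call so other threads
   can run while the call is in progress. */
static int
example_isatty_releasing_gil(int fd)
{
    int res;
    Py_BEGIN_ALLOW_THREADS
    res = isatty(fd);
    Py_END_ALLOW_THREADS
    return res;
}
```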
Fix PyAiter_Check() to only check for the presence of `__anext__` (not
`__aiter__`). Rename `PyAiter_Check()` to `PyAIter_Check()` and
`PyObject_GetAiter()` to `PyObject_GetAIter()`.
Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com>
The binhex module, deprecated in Python 3.9, is now removed. The
following binascii functions, deprecated in Python 3.9, are now also
removed:
* a2b_hqx(), b2a_hqx();
* rlecode_hqx(), rledecode_hqx().
The binascii.crc_hqx() function remains available.
Fix indentation of <no Python frame> message in a faulthandler
traceback or a Fatal Python error traceback. Example:
Current thread 0x00007f03896fb740 (most recent call first):
Garbage-collecting
<no Python frame>
Frozen modules must be added to several files in order to work properly. Before this change this had to be done manually. Here we add a tool to generate the relevant lines in those files instead. This helps us avoid mistakes and omissions.
https://bugs.python.org/issue45019
Places the locals between the specials and the stack. This is the more "natural" layout for a C struct, makes the code simpler, and gives a slight speedup (~1%).
Implements a two-step check in `importlib._bootstrap._find_and_load()` to avoid locking when the module has already been imported and is ready.
---
Using `importlib.__import__()`, after this, does show a big difference:
Before:
```
$ ./python -c 'import timeit; print(timeit.timeit("__import__(\"timeit\")", setup="from importlib import __import__"))'
15.92248619502061
```
After:
```
$ ./python -c 'import timeit; print(timeit.timeit("__import__(\"timeit\")", setup="from importlib import __import__"))'
1.206068897008663
```
---
* Refactor dispatch logic to make the flow of control clearer. Moves lltrace and dxprofile instrumentation into the DISPATCH macro.
* Remove the switch in the interpreter loop when using computed gotos (a toy sketch of computed-goto dispatch follows this list). There is no need for two nearly-duplicate dispatch tables.
* Generalize cache names for LOAD_ATTR to allow store and delete specializations.
* Factor out specialization of attribute dictionary access.
* Specialize STORE_ATTR.
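A toy, self-contained illustration of computed-goto dispatch (the GCC/Clang "labels as values" extension); this is not CPython's actual DISPATCH macro, just the shape of the technique:

```
#include <stdio.h>

/* Each handler jumps straight to the next one through a table of
   label addresses, so no separate switch statement is needed. */
static int
run(const unsigned char *code)
{
    static void *targets[] = { &&op_push1, &&op_add, &&op_halt };
    int acc = 0;
    const unsigned char *ip = code;

#define DISPATCH() goto *targets[*ip++]
    DISPATCH();

op_push1:
    acc = 1;
    DISPATCH();
op_add:
    acc += 1;
    DISPATCH();
op_halt:
#undef DISPATCH
    return acc;
}

int
main(void)
{
    const unsigned char code[] = { 0, 1, 1, 2 };  /* push1, add, add, halt */
    printf("%d\n", run(code));                    /* prints 3 */
    return 0;
}
```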
Fix the os.set_inheritable() function on FreeBSD 14 for file
descriptors opened with the O_PATH flag: ignore the EBADF error from
ioctl() and fall back to the fcntl() implementation.
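A hedged sketch of the described fallback (the helper name is invented and the real error handling is more involved):

```
#include <sys/ioctl.h>
#include <fcntl.h>
#include <errno.h>

/* Try the single-syscall ioctl() path first; FIONCLEX clears
   close-on-exec (making the fd inheritable) and FIOCLEX sets it.
   If the kernel rejects the descriptor with EBADF, as FreeBSD 14
   does for O_PATH descriptors, fall back to fcntl(). */
static int
example_set_inheritable(int fd, int inheritable)
{
    if (ioctl(fd, inheritable ? FIONCLEX : FIOCLEX, NULL) == 0) {
        return 0;
    }
    if (errno != EBADF && errno != ENOTTY) {
        return -1;
    }

    int flags = fcntl(fd, F_GETFD);
    if (flags < 0) {
        return -1;
    }
    if (inheritable) {
        flags &= ~FD_CLOEXEC;
    }
    else {
        flags |= FD_CLOEXEC;
    }
    return fcntl(fd, F_SETFD, flags);
}
```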
The threading debug (PYTHONTHREADDEBUG environment variable) is
deprecated in Python 3.10 and will be removed in Python 3.12. This
feature requires a debug build of Python.