* Replace PyArg_ParseTupleAndKeywords() with _PyArg_ParseStackAndKeywords(),
which parses keywords more efficiently: keyword names (char*) are decoded from
UTF-8 only once, instead of at each call.
* METH_FASTCALL avoids the creation of a temporary tuple to pass positional
arguments.
Patch written by INADA Naoki, pushed by Victor Stinner.
Issue #29286. Run Argument Clinic to get the new faster METH_FASTCALL calling
convention for functions using "boring" positional arguments.
Manually fix _elementtree: _elementtree_XMLParser_doctype() must remain
consistent with the clinic code.
Issue #29227: Inline call_function() into _PyEval_EvalFrameDefault() using
Py_LOCAL_INLINE to reduce stack consumption.
Stack consumption in bytes per call, before => after:
test_python_call: 1152 => 1040 (-112 B)
test_python_getitem: 1008 => 976 (-32 B)
test_python_iterator: 1232 => 1120 (-112 B)
=> total: 3392 => 3136 (-256 B)
Copy and then adapt Python/random.c from the default branch. Differences
between the 3.5 and default branches:
* Python 3.5 only uses getrandom() in non-blocking mode: flags=GRND_NONBLOCK
* If getrandom() fails with EAGAIN: py_getrandom() immediately fails and
remembers that getrandom() doesn't work.
* Python 3.5 has no _PyOS_URandomNonblock() function: _PyOS_URandom()
works in non-blocking mode on Python 3.5
* dev_urandom() now calls py_getentropy(). Prepare the fallback so that a
getentropy() failure falls back on reading from /dev/urandom.
* Simplify dev_urandom(). pyurandom() is now responsible for calling
getentropy() or getrandom(). Also enhance the dev_urandom() and pyurandom()
documentation.
* getrandom() is now preferred over getentropy(). glibc 2.24 now implements
getentropy() on Linux using the getrandom() syscall, but getentropy() doesn't
support non-blocking mode. Since getrandom() is tried first, it is no longer
necessary to explicitly exclude getentropy() on Solaris. Replace:
"if defined(HAVE_GETENTROPY) && !defined(sun)"
with "if defined(HAVE_GETENTROPY)"
* Enhance py_getrandom() documentation. py_getentropy() now supports ENOSYS,
EPERM & EINTR
The glibc now implements getentropy() on Linux using the getrandom() syscall,
but getentropy() doesn't support non-blocking mode.
Since getrandom() is tried first, it is no longer necessary to explicitly
exclude getentropy() on Solaris. Replace:
if defined(HAVE_GETENTROPY) && !defined(sun)
with
if defined(HAVE_GETENTROPY)
Issue #28870: Add a new _PY_FASTCALL_SMALL_STACK constant, size of "small
stacks" allocated on the C stack to pass positional arguments to
_PyObject_FastCall().
_PyObject_Call_Prepend() now uses a small stack of 5 arguments (40 bytes)
instead of 8 (64 bytes), since it is modified to use _PY_FASTCALL_SMALL_STACK.
Special thanks to INADA Naoki for pushing the patch through
the last mile, Serhiy Storchaka for reviewing the code, and to
Victor Stinner for suggesting the idea (originally implemented
in the PyPy project).
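A rough sketch of the "small stack" pattern, assuming the private
_PyObject_FastCall() API (the function name and the hardcoded size are
illustrative; this is not the actual _PyObject_Call_Prepend() code):

    #include "Python.h"
    #include <string.h>

    static PyObject *
    call_prepend_sketch(PyObject *callable, PyObject *obj,
                        PyObject **args, Py_ssize_t nargs)
    {
        PyObject *small_stack[5];   /* _PY_FASTCALL_SMALL_STACK in the patch */
        PyObject **stack;
        PyObject *result;

        if (nargs + 1 <= (Py_ssize_t)Py_ARRAY_LENGTH(small_stack)) {
            stack = small_stack;    /* common case: no heap allocation */
        }
        else {
            stack = PyMem_Malloc((nargs + 1) * sizeof(PyObject *));
            if (stack == NULL) {
                return PyErr_NoMemory();
            }
        }
        stack[0] = obj;             /* prepend obj... */
        memcpy(&stack[1], args, (size_t)nargs * sizeof(PyObject *));
        result = _PyObject_FastCall(callable, stack, nargs + 1);
        if (stack != small_stack) {
            PyMem_Free(stack);
        }
        return result;
    }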
PEP 523 modified PyEval_EvalFrameEx(): it is now an indirection to
interp->eval_frame().
Inline the call in performance-critical code. Leave PyEval_EvalFrame()
unchanged; that function is only kept for backward compatibility.
Issue #28915: Replace _PyObject_CallMethodId() with
_PyObject_CallMethodIdObjArgs() in various modules when the format string was
only made of "O" formats, PyObject* arguments.
_PyObject_CallMethodIdObjArgs() avoids the creation of a temporary tuple and
doesn't have to parse a format string.
Issue #28915: Replace _PyObject_CallMethodId() with
_PyObject_CallMethodIdObjArgs() when the format string only uses the format
'O' for objects, like "(O)".
_PyObject_CallMethodIdObjArgs() avoids the code to parse a format string and
avoids the creation of a temporary tuple.
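A minimal before/after sketch ("write", call_write() and its arguments are
only examples):

    #include "Python.h"

    static PyObject *
    call_write(PyObject *obj, PyObject *arg)
    {
        _Py_IDENTIFIER(write);
        /* Before: _PyObject_CallMethodId(obj, &PyId_write, "(O)", arg)
           parsed a format string and built a temporary 1-tuple. */
        /* After: arguments are passed directly as a NULL-terminated list. */
        return _PyObject_CallMethodIdObjArgs(obj, &PyId_write, arg, NULL);
    }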
Issue #28915: The Py_ssize_t type is better suited for indexes: the compiler
may emit more efficient code for i++, and Py_ssize_t is the type of a PyTuple
index, for example.
Also replace "int endchar" with "char endchar".
Issue #28838: Rename parameters of the "calls" functions of the Python C API.
* Rename 'callable_object' and 'func' to 'callable': any Python callable object
is accepted, not only Python functions
* Rename 'method' and 'nameid' to 'name' (method name)
* Rename 'o' to 'obj'
* Move, fix and update the documentation of the PyObject_CallXXX() functions
in abstract.h
* Also update the C API documentation (update parameter names)
Replace
_PyObject_CallArg1(func, arg)
with
PyObject_CallFunctionObjArgs(func, arg, NULL)
Using the _PyObject_CallArg1() macro increases the usage of the C stack, which
was unexpected and unwanted. PyObject_CallFunctionObjArgs() doesn't have this
issue.
Handling zero-argument super() in __init_subclass__ and
__set_name__ involved moving __class__ initialisation to
type.__new__. This requires cooperation from custom
metaclasses to ensure that the new __classcell__ entry
is passed along appropriately.
The initial implementation of that change resulted in abruptly
broken zero-argument super() support in metaclasses that didn't
adhere to the new requirements (such as Django's metaclass for
Model definitions).
The updated approach adopted here instead emits a deprecation
warning for those cases, and makes them work the same way they
did in Python 3.5.
This patch also improves the related class machinery documentation
to cover these details and to include more reader-friendly
cross-references and index entries.
Issue #28858: The change b9c9691c72c5 introduced a regression. It seems like
_PyObject_CallArg1() uses more stack memory than
PyObject_CallFunctionObjArgs().
Replace
PyObject_CallFunction(func, "O", arg)
and
PyObject_CallFunction(func, "O", arg, NULL)
with
_PyObject_CallArg1(func, arg)
Replace
PyObject_CallFunction(func, NULL)
with
_PyObject_CallNoArg(func)
_PyObject_CallNoArg() and _PyObject_CallArg1() are simpler and don't allocate
memory on the C stack.
* PyObject_CallFunctionObjArgs(func, NULL) => _PyObject_CallNoArg(func)
* PyObject_CallFunctionObjArgs(func, arg, NULL) => _PyObject_CallArg1(func, arg)
PyObject_CallFunctionObjArgs() allocates 40 bytes on the C stack and requires
extra work to "parse" C arguments to build a C array of PyObject*.
_PyObject_CallNoArg() and _PyObject_CallArg1() are simpler and don't allocate
memory on the C stack.
This change is part of the fastcall project. The change on listsort() is
related to issue #23507.
Issue #28799:
* Remove the PyEval_GetCallStats() function.
* Deprecate the untested and undocumented sys.callstats() function.
* Remove the CALL_PROFILE special build
Use the sys.setprofile() function, cProfile or profile module to profile
function calls.
Issue #28782: Fix a bug in the implementation of ``yield from`` when checking
if the next instruction is YIELD_FROM. Regression introduced by WORDCODE
(issue #26647).
Reviewed by Serhiy Storchaka and Yury Selivanov.
Issue #28691: Fix warn_invalid_escape_sequence(): correctly handle a
DeprecationWarning raised as an exception. First clear the current exception
to replace the DeprecationWarning exception with a SyntaxError exception.
Unit test written by Serhiy Storchaka.
When Python is not compiled with PGO, the performance of Python on the
call_simple and call_method microbenchmarks depends heavily on code placement.
In the worst case, the slowdown can be up to 70%.
The GCC __attribute__((hot)) attribute helps keep hot code close together to
reduce the risk of such a major slowdown. This attribute is ignored when
Python is compiled with PGO.
The following functions are considered as hot according to statistics collected
by perf record/perf report:
* _PyEval_EvalFrameDefault()
* call_function()
* _PyFunction_FastCall()
* PyFrame_New()
* frame_dealloc()
* PyErr_Occurred()
Add _PyErr_FormatFromCause(): raise a new exception, setting the current
exception as its __cause__.
_PyErr_FormatFromCause(exception, format, args...) is equivalent to the Python
code:
raise exception(format % args) from sys.exc_info()[1]
* BUILD_TUPLE_UNPACK and BUILD_MAP_UNPACK_WITH_CALL are no longer generated
for a single tuple or dict.
* Restored more informative error messages for incorrect var-positional and
var-keyword arguments.
* Removed code duplications in _PyEval_EvalCodeWithName().
* Removed redundant runtime checks and parameters in _PyStack_AsDict().
* Added a workaround and enabled previously disabled test in test_traceback.
* Removed dead code from the dis module.
The __class__ cell used by zero-argument super() is now initialized
from type.__new__ rather than __build_class__, so class methods
relying on that will now work correctly when called from metaclass
methods during class creation.
Patch by Martin Teichmann.
Issue #27810:
* Modify vgetargskeywordsfast() to work on a C array of PyObject* rather than
working on a tuple directly.
* Add _PyArg_ParseStack()
* Argument Clinic now emits code using the new METH_FASTCALL calling convention
Issue #27810: Add a new calling convention for C functions:
PyObject* func(PyObject *self, PyObject **args,
Py_ssize_t nargs, PyObject *kwnames);
Where args is a C array of positional arguments followed by the values of
keyword arguments, nargs is the number of positional arguments, and kwnames is
a tuple of the keyword argument names. kwnames can be NULL.
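A hedged illustration of the convention (the function, its error messages and
the method table are invented for the example, not part of the patch):

    #include "Python.h"

    static PyObject *
    example_add(PyObject *self, PyObject **args, Py_ssize_t nargs,
                PyObject *kwnames)
    {
        /* kwnames may be NULL when no keyword argument is passed */
        if (kwnames != NULL && PyTuple_GET_SIZE(kwnames) != 0) {
            PyErr_SetString(PyExc_TypeError, "add() takes no keyword arguments");
            return NULL;
        }
        if (nargs != 2) {
            PyErr_SetString(PyExc_TypeError, "add() takes exactly 2 arguments");
            return NULL;
        }
        return PyNumber_Add(args[0], args[1]);  /* args is a plain C array */
    }

    static PyMethodDef example_methods[] = {
        {"add", (PyCFunction)example_add, METH_FASTCALL, "add(a, b)"},
        {NULL, NULL, 0, NULL}
    };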
Tested on macOS 10.11 dtrace, Ubuntu 16.04 SystemTap, and libbcc.
Largely based on an initial patch by Jesús Cea Avión, with some
influence from Dave Malcolm's SystemTap patch and Nikhil Benesch's
unification patch.
Things deliberately left out for simplicity:
- ustack helpers, I have no way of testing them at this point since
they are Solaris-specific
- PyFrameObject * in function__entry/function__return, this is
SystemTap-specific
- SPARC support
- dynamic tracing
- sys module dtrace facility introspection
All of those might be added later.
Issue #27830: Add _PyObject_FastCallKeywords(): avoid the creation of a
temporary dictionary for keyword arguments.
Other changes:
* Cleanup call_function() and fast_function() (ex: rename nk to nkwargs)
* Remove now useless do_call(), replaced with _PyObject_FastCallKeywords()
Issue #27213: Rework CALL_FUNCTION* opcodes to produce shorter and more
efficient bytecode:
* CALL_FUNCTION now only accepts positional arguments.
* CALL_FUNCTION_KW accepts positional and keyword arguments, but the keys of
the keyword arguments are packed into a constant tuple.
* CALL_FUNCTION_EX is the most generic, it expects a tuple and a dict for
positional and keyword arguments.
CALL_FUNCTION_VAR and CALL_FUNCTION_VAR_KW opcodes have been removed.
Two tests of test_traceback are currently broken: skip them; issue #28050 was
created to track the problem.
Patch by Demur Rumed, design by Serhiy Storchaka, reviewed by Serhiy Storchaka
and Victor Stinner.
symtable_analyze() calls analyze_block() with bound=NULL. Theoretically
that NULL can be passed down to update_symbols(). update_symbols() may
dereference NULL and pass it to PySet_Contains().
Issue #27776: The os.urandom() function now blocks on Linux 3.17 and newer
until the system urandom entropy pool is initialized, to increase security.
This change is part of the PEP 524.
Don't pass "()" format to PyObject_CallXXX() to call a function without
argument: pass NULL as the format string instead. It avoids to have to parse a
string to produce 0 argument.
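A minimal before/after sketch (func is assumed to be any Python callable):

    #include "Python.h"

    static PyObject *
    call_no_args(PyObject *func)
    {
        /* Before: PyObject_CallFunction(func, "()") parsed the format string
           just to produce zero arguments. */
        /* After: a NULL format string calls func with no arguments directly. */
        return PyObject_CallFunction(func, NULL);
    }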
Issue #27830: Similar to _PyObject_FastCallDict(), but keyword arguments are
also passed in the same C array as positional arguments, rather than being
passed as a Python dict.
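Assuming this refers to the private _PyObject_FastCallKeywords() API added by
this issue, a hedged sketch of passing one positional and one keyword argument
(the "flag" keyword name and the wrapper function are invented):

    #include "Python.h"

    static PyObject *
    fastcall_one_kwarg(PyObject *func, PyObject *pos0, PyObject *kwvalue)
    {
        /* One positional argument followed by one keyword value, all in the
           same C array; the keyword names go into a separate tuple. */
        PyObject *stack[2];
        PyObject *kwnames, *result;

        stack[0] = pos0;
        stack[1] = kwvalue;
        kwnames = Py_BuildValue("(s)", "flag");
        if (kwnames == NULL) {
            return NULL;
        }
        result = _PyObject_FastCallKeywords(func, stack, 1, kwnames);
        Py_DECREF(kwnames);
        return result;
    }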
Issue #27809: PyEval_CallObjectWithKeywords() no longer temporarily increments
the reference count of the args tuple (positional arguments). The caller
already holds a strong reference to it.
This was found by PVS-Studio:
V595 The 'def' pointer was utilized before it was verified
against nullptr. Check lines: 286, 292. pystate.c 286
Initial patch by Christian Heimes.
Issue #27128: When a Python function is called with no arguments but all
parameters have a default value, use the default values as arguments for the
fast path.
Issue #27128: Modify PyEval_CallObjectWithKeywords() to use
_PyObject_FastCall() when args==NULL and kw==NULL. It avoids the creation of a
temporary empty tuple for positional arguments.
Issue #27128: Add _PyObject_FastCall(), a new calling convention that avoids a
temporary tuple to pass positional parameters in most cases, creating a
temporary tuple only when needed (e.g. for the tp_call slot).
The API is prepared to support keyword parameters, but the full implementation
will come later (_PyFunction_FastCall() doesn't support keyword parameters
yet).
Add also:
* _PyStack_AsTuple() helper function: convert a "stack" of parameters to
a tuple.
* _PyCFunction_FastCall(): fast call implementation for C functions
* _PyFunction_FastCall(): fast call implementation for Python functions
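A minimal usage sketch of _PyObject_FastCall() (the wrapper function is
hypothetical):

    #include "Python.h"

    static PyObject *
    fastcall_two(PyObject *callable, PyObject *x, PyObject *y)
    {
        /* Positional arguments live in a plain C array: no temporary tuple. */
        PyObject *stack[2];
        stack[0] = x;
        stack[1] = y;
        return _PyObject_FastCall(callable, stack, 2);
    }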
Issue #27558: Fix a SystemError in the implementation of the "raise" statement.
In a brand new thread, raise a RuntimeError since there is no active
exception to reraise.
Patch written by Xiang Zhang.
Issue #27128, #18295: Replace the int type with Py_ssize_t for index variables
used for positional arguments. This should help avoid integer overflow and let
the compiler emit better machine code for "i++" (no overflow trap needed).
Also make the total_args variable constant.
* Add comments
* Add empty lines for readability
* PEP 7 style for if block
* Remove useless assert(globals != NULL); (globals is tested a few lines
before)
Modify py_getrandom() to not call PyErr_CheckSignals() if raise is zero.
_PyRandom_Init() is called very early in the Python initialization, so it's
safer to not call PyErr_CheckSignals().
* Add a pyurandom() helper function to factorize the code
* Don't call Py_FatalError() in helper functions; call it only in
_PyRandom_Init() if pyurandom() failed, to keep the code consistent
Large sections of repeated lines in tracebacks are now abbreviated as
"[Previous line repeated {count} more times]" by both the traceback
module and the builtin traceback rendering.
Patch by Emanuel Barry.
or builtins for importing submodules or "from import". Fixed a crash when
raising a warning about being unable to resolve the package from __spec__ or
__package__.
SHOW_ALLOC_COUNT or SHOW_TRACK_COUNT macros is now off by default. It can
be re-enabled using the "-X showalloccount" option. It now outputs to stderr
instead of stdout.
Issue #27278: Fix the os.urandom() implementation using getrandom() on Linux.
Truncate the size to INT_MAX and loop until enough random bytes have been
collected, instead of directly casting a Py_ssize_t to an int.
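A simplified sketch of the chunking loop (not the exact py_getrandom() code;
flags, EINTR and GIL handling are omitted):

    #include "Python.h"
    #include <limits.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static int
    getrandom_chunked(char *buffer, Py_ssize_t size)
    {
        while (size > 0) {
            /* Cap each request: casting a large Py_ssize_t directly to int
               could truncate or overflow. */
            size_t n = (size_t)Py_MIN(size, (Py_ssize_t)INT_MAX);
            long written = syscall(SYS_getrandom, buffer, n, 0);
            if (written < 0) {
                return -1;
            }
            buffer += written;
            size -= (Py_ssize_t)written;
        }
        return 0;
    }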
plat-$(PLATFORM_TRIPLET).
Rename the config directory (LIBPL) from config-$(LDVERSION) to
config-$(LDVERSION)-$(PLATFORM_TRIPLET).
Install the platform-specific _sysconfigdata module into the platform
directory and rename it to include the ABIFLAGS.
Issue #26839: On Linux, os.urandom() now calls getrandom() with GRND_NONBLOCK
to fall back on reading /dev/urandom if the urandom entropy pool is not
initialized yet. Patch written by Colm Buckley.
* Replace PyUnicode_RPartition() with PyUnicode_FindChar() and
PyUnicode_Substring() to avoid the creation of a temporary tuple.
* Use PyUnicode_FromFormat() to build a string and avoid the single_dot ('.')
singleton
Thanks Serhiy Storchaka for your review.
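A hedged sketch of that kind of replacement (the parent_package() helper is
hypothetical and assumes name is a str):

    #include "Python.h"

    static PyObject *
    parent_package(PyObject *name)
    {
        /* Find the last '.' and take everything before it, without the
           temporary (head, sep, tail) tuple that rpartition would build. */
        Py_ssize_t len = PyUnicode_GET_LENGTH(name);
        Py_ssize_t dot = PyUnicode_FindChar(name, '.', 0, len, -1);
        if (dot == -2) {
            return NULL;                 /* error */
        }
        if (dot == -1) {
            return PyUnicode_New(0, 0);  /* no dot: empty parent name */
        }
        return PyUnicode_Substring(name, 0, dot);
    }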
Issue #27057: Fix os.set_inheritable() on Android, where ioctl() is blocked by
SELinux and fails with EACCES. The function now falls back to fcntl().
Patch written by Michał Bednarski.
Issue #26770: set_inheritable() avoids calling fcntl() twice if the FD_CLOEXEC
flag is already set or cleared. This change only impacts platforms using the
fcntl() implementation of set_inheritable() (neither Linux nor Windows).
Issue #26735: Fix os.urandom() on Solaris 11.3 and newer when reading more than
1,024 bytes: call getrandom() multiple times with a limit of 1024 bytes per
call.
Issue #20021: use importlib.machinery to import Lib/opcode.py and not an opcode
module coming from somewhere else. makeopcodetargets.py is part of the Python
build process and it is run by an external Python program, not the built Python
program.
Patch written by Serhiy Storchaka.
* Simply use "import opcode" to import the opcode module instead of tricks
using the imp module
* Use context manager for the output file
* Move code into a new main() function
* Replace assert with a regular if to check the number of arguments
* Import modules at top level
Issue #26637: The importlib module now emits an ImportError rather than a
TypeError if __import__() is tried during the Python shutdown process but
sys.path is already cleared (set to None).
Issue #26588:
* Pass the hash table rather than the key size to hash and compare functions
* _Py_HASHTABLE_READ_KEY() and _Py_HASHTABLE_ENTRY_READ_KEY() macros now expect
the hash table as the first parameter, rather than the key size
* tracemalloc_get_traces_fill(): use _Py_HASHTABLE_ENTRY_READ_DATA() rather
than pointer dereference
* Remove the _Py_HASHTABLE_ENTRY_WRITE_PKEY() macro
* Move "PKEY" and "PDATA" macros inside hashtable.c
Issue #26592: _warnings.warn_explicit() now tries to import the warnings
module (the Python implementation) if the source parameter is set, in order to
be able to log the traceback where the source was allocated.
Issue #26604:
* Add a new optional source parameter to _warnings.warn() and warnings.warn()
* Modify asyncore, asyncio and _pyio modules to set the source parameter when
logging a ResourceWarning warning
Issue #26588: hashtable.h now supports keys of any size, not only
sizeof(void*). This allows supporting keys larger than sizeof(void*), but also
using less memory for keys smaller than sizeof(void*).
Change pushed by mistake, the patch is still under review :-/
"""
_tracemalloc: add domain to trace keys
* hashtable.h: keys now have a variable size
* _tracemalloc uses (pointer: void*, domain: unsigned int) as the key for traces
"""
Issue #26567:
* Add a new PyErr_ResourceWarning() function to pass the destroyed object
* Add a source attribute to warnings.WarningMessage
* Add warnings._showwarnmsg(), which uses tracemalloc to get the traceback
where the source object was allocated.
Issue #26568: add new _showwarnmsg() and _formatwarnmsg() functions to the
warnings module.
The C function warn_explicit() now calls warnings._showwarnmsg() with a
warnings.WarningMessage as parameter, instead of calling warnings.showwarning()
with multiple parameters.
_showwarnmsg() calls warnings.showwarning() if warnings.showwarning() was
replaced. Same for _formatwarnmsg(): call warnings.formatwarning() if it was
replaced.
Issue #26563:
* Add the _PyGILState_GetInterpreterStateUnsafe() function: return the single
PyInterpreterState used by this process' GILState implementation.
* Enhance _Py_DumpTracebackThreads() to retrieve the interpreter state from
autoInterpreterState as a last resort. The function now accepts NULL for the
interp and current_tstate parameters.
* test_faulthandler: fix a ResourceWarning when the test is interrupted by
CTRL+C
Issue #26538: libregrtest: Fix setup_tests() to keep module.__path__ type
(_NamespacePath), don't convert to a list.
Add _NamespacePath.__setitem__() method to importlib._bootstrap_external.
Issue #26564:
* Expose _Py_DumpASCII() and _Py_DumpDecimal() in traceback.h
* Change the type of the second _Py_DumpASCII() parameter from int to unsigned
long
* Rewrite _Py_DumpDecimal() and dump_hexadecimal() to write characters
directly in the expected order, avoiding the need to reverse the string.
* dump_hexadecimal() limits the width to the size of the buffer
* _Py_DumpASCII() does nothing if the object is not a Unicode string
* dump_frame() writes "???" as the line number if the line number is negative
Issue #10915, #15751, #26558:
* PyGILState_Check() now returns 1 (success) before the creation of the GIL
and after the destruction of the GIL. This allows using the function early in
Python initialization and late in Python finalization.
* Add a flag to disable PyGILState_Check(). Disable PyGILState_Check() when
Py_NewInterpreter() is called
* Add assert(PyGILState_Check()) to: _Py_dup(), _Py_fstat(), _Py_read()
and _Py_write()
Issue #26558: If Py_FatalError() is called without the GIL, don't try to print
the current exception, nor try to flush stdout and stderr: only dump the
traceback of Python threads.
Issue #26516:
* Add PYTHONMALLOC environment variable to set the Python memory
allocators and/or install debug hooks.
* PyMem_SetupDebugHooks() can now also be used on Python compiled in release
mode.
* The PYTHONMALLOCSTATS environment variable can now also be used on Python
compiled in release mode. It now has no effect if set to an empty string.
* In debug mode, debug hooks are now also installed on Python memory allocators
when Python is configured without pymalloc.
In practice, bytecode instruction arguments are unsigned. Update the assertion
to make it explicit that the argument must be greater than or equal to 0.
Also rewrite the comment.
The compiler now ignores constant statements and emits a SyntaxWarning.
Don't emit the warning for string statements, because a triple-quoted string
is a common syntax for multiline comments.
Don't emit the warning for the ellipsis either: 'def f(): ...' is a legitimate
syntax for abstract functions.
Changes:
* test_ast: ignore SyntaxWarning when compiling test statements. Modify
test_load_const() to use assignment expressions rather than constant
expression.
* test_code: add more kinds of constant statements, ignore SyntaxWarning when
testing that the compiler removes constant statements.
* test_grammar: ignore SyntaxWarning on the statement "1"
The obj2ast_constant() code is based on obj2ast_object(), which has a special
case for Py_None. But in practice, we don't need a special case for constants.
Issue noticed by Joseph Jevnik during a review.
Issue #26146: Add a new kind of AST node: ast.Constant. It can be used by
external AST optimizers, but the compiler does not emit such a node directly.
An optimizer can replace the following AST nodes with ast.Constant:
* ast.NameConstant: None, False, True
* ast.Num: int, float, complex
* ast.Str: str
* ast.Bytes: bytes
* ast.Tuple if items are constants too: tuple
* frozenset
Update code to accept ast.Constant instead of ast.Num and/or ast.Str:
* compiler
* docstrings
* ast.literal_eval()
* Tools/parser/unparse.py
with no known parent package.
Previously SystemError was raised if the parent package didn't exist
(e.g., __package__ was set to '').
Thanks to Florent Xicluna and Yongzhi Pan for reporting the issue.
In a previous change, __spec__.parent was prioritized over
__package__. That is a backwards-compatibility break, but we do
eventually want __spec__ to be the ground truth for module details. So
this change reverts the change in semantics and instead raises an
ImportWarning when __package__ != __spec__.parent to give people time
to adjust to using spec objects.
Issue #26161: Use Py_uintptr_t instead of void* for atomic pointers in
pyatomic.h. Use atomic_uintptr_t when <stdatomic.h> is used.
Using void* causes compilation warnings depending on which implementation of
atomic types is used.
Issue #25843: When compiling code, don't merge constants if they are equal but
have different types. For example, "f1, f2 = lambda: 1, lambda: 1.0" is now
correctly compiled to two different functions: f1() returns 1 (int) and f2()
returns 1.0 (float), even if 1 and 1.0 are equal.
Add a new _PyCode_ConstantKey() private function.
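The core idea of the key, as a rough sketch (the real _PyCode_ConstantKey()
handles additional special cases):

    #include "Python.h"

    static PyObject *
    constant_key_sketch(PyObject *op)
    {
        /* Pair each constant with its type so that equal values of different
           types (1 vs 1.0) map to different keys during constant merging. */
        return PyTuple_Pack(2, (PyObject *)Py_TYPE(op), op);
    }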
Issue #26107: The format of the co_lnotab attribute of code objects changed to
support negative line number deltas.
Changes:
* assemble_lnotab(): if the line number delta is less than -128 or greater
than 127, emit multiple (offset_delta, lineno_delta) pairs in co_lnotab
* update functions decoding co_lnotab to use signed 8-bit integers
- dis.findlinestarts()
- PyCode_Addr2Line()
- _PyCode_CheckLineNumber()
- frame_setlineno()
* update lnotab_notes.txt
* increase importlib MAGIC_NUMBER to 3361
* document the change in What's New in Python 3.6
* also clean up PyCode_Optimize() to use better variable names
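A simplified decoding sketch following the description above (roughly what
PyCode_Addr2Line() does; bounds and error handling omitted):

    #include "Python.h"

    static int
    addr2line_sketch(PyCodeObject *co, int addrq)
    {
        /* Walk the (offset_delta, lineno_delta) pairs; line deltas are now
           read as signed 8-bit integers, so negative deltas are supported. */
        Py_ssize_t size = PyBytes_GET_SIZE(co->co_lnotab) / 2;
        unsigned char *p = (unsigned char *)PyBytes_AS_STRING(co->co_lnotab);
        int line = co->co_firstlineno;
        int addr = 0;

        while (size-- > 0) {
            addr += *p++;                /* offset delta: unsigned */
            if (addr > addrq) {
                break;
            }
            line += (signed char)*p++;   /* line delta: signed */
        }
        return line;
    }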
Issue #26154: Add a new private _PyThreadState_UncheckedGet() function which
gets the current thread state but doesn't call Py_FatalError() if it is NULL.
Python 3.5.1 removed the _PyThreadState_Current symbol from the Python C API
so as to no longer expose complex and private atomic types. Atomic types
depend on the compiler and can even depend on compiler options. The new
function _PyThreadState_UncheckedGet() allows getting the variable's value
without having to care about the exact implementation of atomic types.
Changes:
* Replace direct usage of the _PyThreadState_Current variable with a call to
_PyThreadState_UncheckedGet().
* In pystate.c, replace direct usage of the _PyThreadState_Current variable
with the PyThreadState_GET() macro for readability.
* Document also PyThreadState_Get() in pystate.h
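A hedged usage sketch (the wrapper function is hypothetical):

    #include "Python.h"

    static void
    dump_if_possible(void)
    {
        /* Unlike PyThreadState_Get(), this never calls Py_FatalError():
           it simply returns NULL when there is no current thread state. */
        PyThreadState *tstate = _PyThreadState_UncheckedGet();
        if (tstate == NULL) {
            return;  /* no Python thread state: bail out instead of aborting */
        }
        /* ... safe to inspect tstate here ... */
    }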
not defined for a relative import.
This is the start of work to try and clean up import semantics to rely
more on a module's spec than on the myriad attributes that get set on
a module. Thanks to Rose Ames for the patch.
Don't fall back to PyDict_GetItemWithError() if the hash is unknown: compute
the hash instead. Also add comments to explain the optimization a little bit.
This avoids possible buffer overreads when int(), float(), compile(), exec()
and eval() are passed bytes-like objects. Similar code is removed from the
complex() constructor, where it was not reachable.
Patch by John Leitch, Serhiy Storchaka and Martin Panter.
requested name doesn't exist in globals: clear the KeyError exception before
calling PyObject_GetItem(). Also fail if the raised exception is not a
KeyError.
This changes the main documentation, doc strings, source code comments, and a
couple error messages in the test suite. In some cases the word was removed
or edited some other way to fix the grammar.
Issue #25274: sys.setrecursionlimit() now raises a RecursionError if the new
recursion limit is too low given the current recursion depth. Also modify the
"lower-water mark" formula to make it monotonic. This mark is used to decide
when the overflowed flag of the thread state is reset.
function instead of the getentropy() function. The getentropy() function
blocks to generate very high quality entropy; os.urandom() doesn't need such
high-quality entropy.
On the x86 OpenBSD 5.8 buildbot, the integer overflow check is ignored. Copy
the tv_sec variable into a Py_time_t variable instead of "simply" casting it to
Py_time_t, to fix the integer overflow check.
On Windows, the tv_sec field of the timeval structure has the C type long,
whereas it has the C type time_t on all other platforms. A C long has a size
of 32 bits (signed integer: 1 bit for the sign, 31 bits for the value), which
is not enough to store an Epoch timestamp after the year 2038.
Add the _PyTime_AsTimevalTime_t() function, written for datetime.datetime.now():
convert a _PyTime_t timestamp to a (secs, us) tuple where secs has the type
time_t. This allows supporting dates after the year 2038 on Windows.
Also enhance _PyTime_AsTimeval_impl() to detect overflow on the number of
seconds when rounding the number of microseconds.
The overflow test in test_FromSecondsObject() fails on the FreeBSD 10.0
buildbot, which uses clang. clang implements more aggressive optimizations,
which give different results than GCC on undefined behaviour.
Check whether a multiplication will overflow, instead of checking whether a
multiplication has overflowed, to avoid undefined behaviour.
Also add debug information if the overflow test fails.
* Filter values which would overflow on conversion to the C long type
(for timeval.tv_sec).
* Also adjust the OverflowError message on PyTime conversions
* test_time: add debug information if a timestamp conversion fails
Drop all hardcoded tests. Instead, reimplement each function in Python, usually
using decimal.Decimal for the rounding mode.
Add many more values to the dataset. Test various timestamp units from
picoseconds to seconds, as integers and floats.
Also enhance _PyTime_AsSecondsDouble().
datetime.datetime now rounds microseconds to nearest, with ties going to the
nearest even integer (ROUND_HALF_EVEN), like round(float), instead of rounding
towards -Infinity (ROUND_FLOOR).
pytime API: replace _PyTime_ROUND_HALF_UP with _PyTime_ROUND_HALF_EVEN. Also
fix _PyTime_Divide() for negative numbers.
_PyTime_AsTimeval_impl() now reuses _PyTime_Divide() instead of reimplementing
rounding modes.
Issue #24891: Fix a race condition at Python startup if the file descriptor
of stdin (0), stdout (1) or stderr (2) is closed while Python is creating
sys.stdin, sys.stdout and sys.stderr objects. These attributes are now set
to None if the creation of the object failed, instead of raising an OSError
exception. Initial patch written by Marco Paolini.
No longer check at runtime that the monotonic clock doesn't go backward.
Yes, it happens: it occurs from time to time on a Debian buildbot slave
running in a VM.
The problem is that Python cannot do anything useful if a monotonic clock goes
backward. It was decided in the PEP 418 to not fix the system, but only expose
the clock provided by the OS.
with ties going away from zero (ROUND_HALF_UP), as in Python 2 and in Python
older than 3.3, instead of rounding to nearest with ties going to the nearest
even integer (ROUND_HALF_EVEN).
See the latest version of getrandom() manual page:
http://man7.org/linux/man-pages/man2/getrandom.2.html#NOTES
The behavior when a call to getrandom() that is blocked while reading from
/dev/urandom is interrupted by a signal handler depends on the
initialization state of the entropy buffer and on the request size, buflen.
If the entropy is not yet initialized, then the call will fail with the
EINTR error. If the entropy pool has been initialized and the request size
is large (buflen > 256), the call either succeeds, returning a partially
filled buffer, or fails with the error EINTR. If the entropy pool has been
initialized and the request size is small (buflen <= 256), then getrandom()
will not fail with EINTR. Instead, it will return all of the bytes that
have been requested.
Note: py_getrandom() calls getrandom() with flags=0.
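A hedged sketch of the retry-on-EINTR pattern implied above (not the exact
py_getrandom() code):

    #include "Python.h"
    #include <errno.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static long
    getrandom_retry(void *dest, size_t size, unsigned int flags)
    {
        long n;
        do {
            n = syscall(SYS_getrandom, dest, size, flags);
            /* Interrupted by a signal: run Python signal handlers and retry,
               unless a handler raised an exception. */
        } while (n < 0 && errno == EINTR && PyErr_CheckSignals() == 0);
        return n;
    }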
Summary of changes:
1. Coroutines now have a distinct type at the C level, separate from
generators: PyCoro_Type, with a new typedef PyCoroObject.
PyCoroObject shares the initial segment of its struct layout with
PyGenObject, making it possible to reuse the existing generator
machinery. The new type is exposed as 'types.CoroutineType'.
As a consequence of having a new type, CO_GENERATOR flag is
no longer applied to coroutines.
2. Having a separate type for coroutines made it possible to add
an __await__ method to the type. Although it is not used by the
interpreter (see details on that below), it makes coroutines
naturally (without using __instancecheck__) conform to
collections.abc.Coroutine and collections.abc.Awaitable ABCs.
[The __instancecheck__ is still used for generator-based
coroutines, as we don't want to add __await__ for generators.]
3. Add new opcode: GET_YIELD_FROM_ITER. The opcode is needed to
allow passing native coroutines to the YIELD_FROM opcode.
Before this change, 'yield from o' expression was compiled to:
(o)
GET_ITER
LOAD_CONST
YIELD_FROM
Now, we use GET_YIELD_FROM_ITER instead of GET_ITER.
The reason for adding a new opcode is that GET_ITER is used
in some contexts (such as 'for .. in' loops) where passing
a coroutine object is invalid.
4. Add two new introspection functions to the inspect module:
getcoroutinestate(c) and getcoroutinelocals(c).
5. inspect.iscoroutine(o) is updated to test if 'o' is a native
coroutine object. Before this commit it used abc.Coroutine,
and it was requested to update inspect.isgenerator(o) to use
abc.Generator; it was decided, however, that inspect functions
should really be tailored for checking for native types.
6. The sys.set_coroutine_wrapper(w) API is updated to work only with
native coroutines. Since the types.coroutine decorator now supports
any type of callable, it would be confusing if it did not work for
all types of coroutines.
7. Exceptions logic in generators C implementation was updated
to raise clearer messages for coroutines:
Before: TypeError("generator raised StopIteration")
After: TypeError("coroutine raised StopIteration")
Known limitations of the current implementation:
- documentation changes are incomplete
- there's a reference leak I haven't tracked down yet
The leak is most visible by running:
./python -m test -R3:3 test_importlib
However, you can also see it by running:
./python -X showrefcount
Importing the array or _testmultiphase modules, and
then deleting them from both sys.modules and the local
namespace shows significant increases in the total
number of active references each cycle. By contrast,
with _testcapi (which continues to use single-phase
initialisation) the global refcounts stabilise after
a couple of cycles.
* adds missing INCREF in WITH_CLEANUP_START
* adds missing DECREF in WITH_CLEANUP_FINISH
* adds several new tests Yury created while investigating this
CID 1291697 (#1 of 1): Dereference before null check (REVERSE_INULL)
check_after_deref: Null-checking tb suggests that it may be null, but it has already been dereferenced on all paths leading to the check.
The concept of .pyo files no longer exists. Now .pyc files have an
optional `opt-` tag which specifies if any extra optimizations beyond
the peepholer were applied.
Also add a new _PyTime_AsMicroseconds() function.
threading.TIMEOUT_MAX is now smaller: only 292 years instead of 292,271 years
on a 64-bit system, for example. Sorry, your threads will hang a *little bit*
shorter. Call me if you want to ensure that your locks wait longer, I can
share some tricks with you.
* _PyTime_AsTimeval() now ensures that tv_usec is always positive
* _PyTime_AsTimespec() now ensures that tv_nsec is always positive
* _PyTime_AsTimeval() now returns an integer on overflow instead of raising an
exception
* Rename _PyTime_FromObject() to _PyTime_FromSecondsObject()
* Add _PyTime_AsNanosecondsObject() and _testcapi.pytime_fromsecondsobject()
* Add unit tests
In practice, _PyTime_t is a number of nanoseconds. Its C type is a 64-bit
signed number. Its integer value is in the range [-2^63; 2^63-1]. In seconds,
the range is around [-292 years; +292 years]. In terms of Epoch timestamps
(1970-01-01), it can store a date between 1677-09-21 and 2262-04-11.
The API has a resolution of 1 nanosecond and uses integer numbers. At a
resolution of 1 nanosecond, 64-bit IEEE 754 floating point numbers lose
precision after 194 days; that is not the case with this API. The drawback is
overflow for values outside [-2^63; 2^63-1], but such values are unlikely for
most Python modules, except for the datetime module.
New functions:
- _PyTime_GetMonotonicClock()
- _PyTime_FromObject()
- _PyTime_AsMilliseconds()
- _PyTime_AsTimeval()
This change uses these new functions in time.sleep() to avoid rounding issues.
The new API will be extended step by step, and the old API will be removed step
by step. Currently, some code is duplicated just to be able to move
incrementally, instead of pushing a large change at once.
Flushing sys.stdout and sys.stderr in Py_FatalError() can call again
Py_FatalError(). Add a reentrant flag to detect this case and just abort at the
second call.
This should help to see exceptions when stderr is buffered: PyErr_Display()
calls sys.stderr.write(); it doesn't write to the stderr file descriptor
directly.
* Display the current Python stack if an exception was raised but the exception
has no traceback
* Disable faulthandler if an exception was raised (before it was only disabled
if no exception was raised)
* To display the current Python stack, call PyGILState_GetThisThreadState()
which works even if the GIL was released
I expected more users of _Py_wstat(), but in practice it's only used by
Modules/getpath.c. Move the function because it's not needed on Windows:
Windows uses PC/getpathp.c, which uses the Win32 API (e.g.
GetFileAttributesW()), not the POSIX API.
which returned an invalid result (result+error or no result without error) in
the exception message.
Also add a unit test to check that the exception contains the name of the
function.
Special case: the final _PyEval_EvalFrameEx() check doesn't mention the
function since it didn't execute a single function but a whole frame.
interrupted by a signal
Add a new _PyTime_AddDouble() function and remove the _PyTime_ADD_SECONDS()
macro. The _PyTime_ADD_SECONDS() macro only supported an integer number of
seconds; _PyTime_AddDouble() has subsecond resolution.
EINTR error and special cases for Windows.
These functions now truncate the length to PY_SSIZE_T_MAX for portable and
reliable behaviour. For example, the result of read() is undefined if the
count is greater than PY_SSIZE_T_MAX on Linux.
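A hedged sketch of the truncation (the read_capped() helper is hypothetical):

    #include "Python.h"
    #include <unistd.h>

    static Py_ssize_t
    read_capped(int fd, void *buf, size_t count)
    {
        if (count > (size_t)PY_SSIZE_T_MAX) {
            /* read() behaviour is undefined above PY_SSIZE_T_MAX on Linux */
            count = (size_t)PY_SSIZE_T_MAX;
        }
        return (Py_ssize_t)read(fd, buf, count);
    }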
available, a syscall introduced in Linux kernel 3.17. It is more reliable and
more secure because it avoids the need for a file descriptor and waits until
the kernel has enough entropy.
* _Py_open() now raises exceptions on error. If open() fails, it raises an
OSError with the filename.
* _Py_open() now releases the GIL while calling open()
* Add _Py_open_noraise() when _Py_open() cannot be used because the GIL is not
held
raise a SystemError if a function returns a result and raises an exception.
The SystemError is chained to the previous exception.
Also refactor PyObject_Call() and PyCFunction_Call() to make them more
readable. Remove some checks which became useless (duplicate checks).
Change reviewed by Serhiy Storchaka.
At entry, save or swap the exception state even if PyEval_EvalFrameEx() is
called with throwflag=0. At exit, the exception state is now always restored or
swapped, not only if why is WHY_YIELD or WHY_RETURN. Patch co-written with
Antoine Pitrou.
importlib.abc.Loader.exec_module() is also defined.
Before this change, create_module() was optional **and** could return
None to trigger default semantics. This change now reduces the
options for choosing default semantics to one and in the most
backporting-friendly way (define create_module() to return None).
Some time ago we changed the docs to consistently use the term 'bytes-like
object' in all the contexts where bytes, bytearray, memoryview, etc are used.
This patch (by Ezio Melotti) completes that work by changing the error
messages that previously reported that certain types did "not support the
buffer interface" to instead say that a bytes-like object is required. (The
glossary entry for bytes-like object references the discussion of the buffer
protocol in the docs.)
threading.Lock.acquire(), threading.RLock.acquire() and socket operations now
use a monotonic clock, instead of the system clock, when a timeout is used.
This platform exposes the function ioctl(FIOCLEX), but calling it fails with
errno ENOTTY: "Inappropriate ioctl for device". set_inheritable() now falls
back to the slower fcntl() (F_GETFD and then F_SETFD).
Illumos. This platform exposes the function ioctl(FIOCLEX), but calling it
fails with errno ENOTTY: "Inappropriate ioctl for device". set_inheritable()
now falls back to the slower fcntl() (F_GETFD and then F_SETFD).
Other changes:
* The whole _PyTime API is private (not defined if Py_LIMITED_API is set)
* _PyTime_gettimeofday_info() also returns -1 on error
* Simplify PyTime_gettimeofday(): only use clock_gettime(CLOCK_REALTIME) or
gettimeofday() on UNIX. Don't fall back to ftime() or time() anymore.
clock_gettime(CLOCK_REALTIME) if available. As a side effect, Python now
depends on the librt library on Solaris and on Linux (only with glibc older
than 2.17).
name, and use it in the representation of a generator (``repr(gen)``). The
default name of the generator (the ``__name__`` attribute) is now taken from
the function instead of the code object. Use ``gen.gi_code.co_name`` to get
the name of the code.
Along the way, dismantle importlib._bootstrap._SpecMethods as it was
no longer relevant and constructing the new function required
partially dismantling the class anyway.
in order to have the same resolution as pthreads condition variables.
At the same time, it must be large enough to accept 31 bits of milliseconds,
which is the maximum timeout value in the Windows API.
A PY_LONG_LONG of microseconds fulfills both requirements.
This closes issue #20737.
- Issue #21274: Define PATH_MAX for GNU/Hurd in Python/pythonrun.c.
- Issue #21276: posixmodule: Don't define USE_XATTRS on KFreeBSD and the Hurd.
- Issue #21275: Fix a socket test on KFreeBSD.
__file__.
This causes _frozen_importlib to no longer have __file__ set as well
as any frozen module imported using imp.init_frozen() (which is
deprecated).
str.encode, bytes.decode and bytearray.decode now use an
internal API to throw LookupError for known non-text encodings,
rather than attempting the encoding or decoding operation and
then throwing a TypeError for an unexpected output type.
The latter mechanism remains in place for third party non-text
encodings.
Backported changeset d68df99d7a57.
now register both filenames in the exception on failure.
This required adding new C API functions allowing OSError exceptions
to reference two filenames instead of one.
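One of these helpers is presumably PyErr_SetFromErrnoWithFilenameObjects(),
which accepts two filename objects; a hedged usage sketch (the wrapper
function is hypothetical):

    #include "Python.h"

    static PyObject *
    rename_result_to_python(PyObject *src, PyObject *dst, int result)
    {
        if (result < 0) {
            /* Both names end up on the OSError (filename and filename2). */
            return PyErr_SetFromErrnoWithFilenameObjects(PyExc_OSError, src, dst);
        }
        Py_RETURN_NONE;
    }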
The new syntax is highly human readable while still preventing false
positives. The syntax also extends Python syntax to denote "self" and
positional-only parameters, allowing inspect.Signature objects to be
totally accurate for all supported builtins in Python 3.4.
- io.TextIOWrapper (and hence the open() builtin) now use the
internal codec marking system added for issue #19619
- also tweaked the C code to only look up the encoding once,
rather than multiple times
- the existing output type checks remain in place to deal with
unmarked third party codecs.
annotate text signatures in docstrings, resulting in fewer false
positives. "self" parameters are also explicitly marked, allowing
inspect.Signature() to authoritatively detect (and skip) said parameters.
Issue #20326: Argument Clinic now generates separate checksums for the
input and output sections of the block, allowing external tools to verify
that the input has not changed (and thus the output is not out-of-date).
PyMethodDescr_Type, _PyMethodWrapper_Type, and PyWrapperDescr_Type)
have been modified to provide introspection information for builtins.
Also: many additional Lib, test suite, and Argument Clinic fixes.
This code was an artifact of issuing a DeprecationWarning for the lack
of loader.exec_module(). However, we have deferred such warnings to
later Python versions.
Early in the PEP 451 implementation some of the importlib loaders had
their own _get_spec() methods to simplify accommodating them. However,
later implementations removed the need. They simply failed to remove
this code at the same time. :)
In Python 3.3, PyThread_set_key_value() did nothing if the key already existed
(if the current value is a non-NULL pointer).
When _PyGILState_NoteThreadState() is called twice on the same thread with a
different Python thread state, it still keeps the old Python thread state to
keep the old behaviour. Replacing the Python thread state with the new state
introduces new bugs: see issues #10915 and #15751.
the function did nothing if the key already exists (if the current value is a
non-NULL pointer).
_testcapi.run_in_subinterp() now correctly sets the new Python thread state of
the current thread when a subinterpreter is created.
crash when a generator is created in a C thread that is destroyed while the
generator is still used. The issue was that a generator contains a frame, and
the frame kept a reference to the Python state of the destroyed C thread. The
crash occurs when a trace function is set up.
has no concrete GIL. If PyGILState_Ensure() is called from a new thread for the
first time and PyEval_InitThreads() was not called yet, a GIL needs to be
created.
module loaders.
Because the call signatures for extension modules and built-in modules do not
allow specifying which module to initialize, and because on Windows all
extension modules are built-in modules, work to clean up built-in and
extension module initialization
will have to wait until Python 3.5. Because of this the semantics of
exec_module() would be incorrect, so removing the methods for now is
the best option; load_module() is still used as a fallback by
importlib and so this won't affect semantics.
written in C.
As a part of this, a few doctests have been added to the builtins module
(on hex(), oct(), and bin()), a doctest has been fixed (hopefully on all
platforms) on float, and test_builtins now runs doctests in builtins.
compiler warnings on Windows 64-bit. Use Py_SAFE_DOWNCAST() where the final
downcast is needed.
The bytecode doesn't support integer parameters larger than 32-bit yet.
The utf-16* and utf-32* encoders no longer allow surrogate code points
(U+D800-U+DFFF) to be encoded.
The utf-32* decoders no longer decode byte sequences that correspond to
surrogate code points.
The surrogatepass error handler now works with the utf-16* and utf-32* codecs.
Based on patches by Victor Stinner and Kang-Hao (Kenny) Lu.
the size of the fullpath buffer, not PATH_MAX. fullpath is declared using
MAXPATHLEN or MAX_PATH depending on the OS, and PATH_MAX is not declared on
IRIX.