Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Conflicts:

drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
  9e26680733 ("bnxt_en: Update firmware call to retrieve TX PTP timestamp")
  9e518f2580 ("bnxt_en: 1PPS functions to configure TSIO pins")
  099fdeda65 ("bnxt_en: Event handler for PPS events")

kernel/bpf/helpers.c
include/linux/bpf-cgroup.h
  a2baf4e8bb ("bpf: Fix potentially incorrect results with bpf_get_local_storage()")
  c7603cfa04 ("bpf: Add ambient BPF runtime context stored in current")

drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
  5957cc557d ("net/mlx5: Set all field of mlx5_irq before inserting it to the xarray")
  2d0b41a376 ("net/mlx5: Refcount mlx5_irq with integer")

MAINTAINERS
  7b637cd52f ("MAINTAINERS: fix Microchip CAN BUS Analyzer Tool entry typo")
  7d901a1e87 ("net: phy: add Maxlinear GPY115/21x/24x driver")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Commit f4083a752a — Jakub Kicinski, 2021-08-13 06:41:22 -07:00
362 changed files with 2986 additions and 1480 deletions

View File

@@ -108,7 +108,7 @@ This bump in ABI version is at most once per kernel development cycle.
 For example, if current state of ``libbpf.map`` is:
-.. code-block:: c
+.. code-block:: none
 LIBBPF_0.0.1 {
 	global:
@@ -121,7 +121,7 @@ For example, if current state of ``libbpf.map`` is:
 , and a new symbol ``bpf_func_c`` is being introduced, then
 ``libbpf.map`` should be changed like this:
-.. code-block:: c
+.. code-block:: none
 LIBBPF_0.0.1 {
 	global:
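The hunks above only show the head of the version script, so for context: ``libbpf.map`` is a linker version script, and a new symbol is exported by adding a new version node that inherits from the previous one. A sketch of the resulting script (``bpf_func_a``/``bpf_func_b`` are illustrative placeholders, not names from this diff):

    LIBBPF_0.0.1 {
            global:
                    bpf_func_a;
                    bpf_func_b;
            local:
                    *;
    };

    LIBBPF_0.0.2 {
            global:
                    bpf_func_c;
    } LIBBPF_0.0.1;

Since ``LIBBPF_0.0.2`` inherits from ``LIBBPF_0.0.1``, binaries linked against the old version node keep resolving, which is why the ABI version bump happens at most once per development cycle.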

View File

@@ -18,114 +18,5 @@ real, with all the uAPI bits is:
 * Route shmem backend over to TTM SYSTEM for discrete
 * TTM purgeable object support
 * Move i915 buddy allocator over to TTM
-* MMAP ioctl mode(see `I915 MMAP`_)
-* SET/GET ioctl caching(see `I915 SET/GET CACHING`_)
 * Send RFC(with mesa-dev on cc) for final sign off on the uAPI
 * Add pciid for DG1 and turn on uAPI for real
-
-New object placement and region query uAPI
-==========================================
-Starting from DG1 we need to give userspace the ability to allocate buffers from
-device local-memory. Currently the driver supports gem_create, which can place
-buffers in system memory via shmem, and the usual assortment of other
-interfaces, like dumb buffers and userptr.
-
-To support this new capability, while also providing a uAPI which will work
-beyond just DG1, we propose to offer three new bits of uAPI:
-
-DRM_I915_QUERY_MEMORY_REGIONS
------------------------------
-New query ID which allows userspace to discover the list of supported memory
-regions(like system-memory and local-memory) for a given device. We identify
-each region with a class and instance pair, which should be unique. The class
-here would be DEVICE or SYSTEM, and the instance would be zero, on platforms
-like DG1.
-
-Side note: The class/instance design is borrowed from our existing engine uAPI,
-where we describe every physical engine in terms of its class, and the
-particular instance, since we can have more than one per class.
-
-In the future we also want to expose more information which can further
-describe the capabilities of a region.
-
-.. kernel-doc:: include/uapi/drm/i915_drm.h
-        :functions: drm_i915_gem_memory_class drm_i915_gem_memory_class_instance drm_i915_memory_region_info drm_i915_query_memory_regions
-
-GEM_CREATE_EXT
---------------
-New ioctl which is basically just gem_create but now allows userspace to provide
-a chain of possible extensions. Note that if we don't provide any extensions and
-set flags=0 then we get the exact same behaviour as gem_create.
-
-Side note: We also need to support PXP[1] in the near future, which is also
-applicable to integrated platforms, and adds its own gem_create_ext extension,
-which basically lets userspace mark a buffer as "protected".
-
-.. kernel-doc:: include/uapi/drm/i915_drm.h
-        :functions: drm_i915_gem_create_ext
-
-I915_GEM_CREATE_EXT_MEMORY_REGIONS
-----------------------------------
-Implemented as an extension for gem_create_ext, we would now allow userspace to
-optionally provide an immutable list of preferred placements at creation time,
-in priority order, for a given buffer object. For the placements we expect
-them each to use the class/instance encoding, as per the output of the regions
-query. Having the list in priority order will be useful in the future when
-placing an object, say during eviction.
-
-.. kernel-doc:: include/uapi/drm/i915_drm.h
-        :functions: drm_i915_gem_create_ext_memory_regions
-
-One fair criticism here is that this seems a little over-engineered[2]. If we
-just consider DG1 then yes, a simple gem_create.flags or something is totally
-all that's needed to tell the kernel to allocate the buffer in local-memory or
-whatever. However looking to the future we need uAPI which can also support
-upcoming Xe HP multi-tile architecture in a sane way, where there can be
-multiple local-memory instances for a given device, and so using both class and
-instance in our uAPI to describe regions is desirable, although specifically
-for DG1 it's uninteresting, since we only have a single local-memory instance.
-
-Existing uAPI issues
-====================
-Some potential issues we still need to resolve.
-
-I915 MMAP
----------
-In i915 there are multiple ways to MMAP GEM object, including mapping the same
-object using different mapping types(WC vs WB), i.e multiple active mmaps per
-object. TTM expects one MMAP at most for the lifetime of the object. If it
-turns out that we have to backpedal here, there might be some potential
-userspace fallout.
-
-I915 SET/GET CACHING
---------------------
-In i915 we have set/get_caching ioctl. TTM doesn't let us to change this, but
-DG1 doesn't support non-snooped pcie transactions, so we can just always
-allocate as WB for smem-only buffers. If/when our hw gains support for
-non-snooped pcie transactions then we must fix this mode at allocation time as
-a new GEM extension.
-
-This is related to the mmap problem, because in general (meaning, when we're
-not running on intel cpus) the cpu mmap must not, ever, be inconsistent with
-allocation mode.
-
-Possible idea is to let the kernel picks the mmap mode for userspace from the
-following table:
-
-smem-only: WB. Userspace does not need to call clflush.
-
-smem+lmem: We only ever allow a single mode, so simply allocate this as uncached
-memory, and always give userspace a WC mapping. GPU still does snooped access
-here(assuming we can't turn it off like on DG1), which is a bit inefficient.
-
-lmem only: always WC
-
-This means on discrete you only get a single mmap mode, all others must be
-rejected. That's probably going to be a new default mode or something like
-that.
-
-Links
-=====
-[1] https://patchwork.freedesktop.org/series/86798/
-
-[2] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5599#note_553791
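Although the walkthrough above was dropped from the TODO, the uAPI it describes did land in ``include/uapi/drm/i915_drm.h``. As a rough illustration of the create-ext flow, here is a minimal userspace sketch in C; it assumes the merged structure and ioctl names, an already-open DRM fd, and omits error handling and the region-query step:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Create a buffer preferring device local-memory, with smem fallback. */
    static int create_lmem_bo(int fd, __u64 size, __u32 *handle)
    {
            /* Placement list in priority order, class/instance encoded as in
             * the DRM_I915_QUERY_MEMORY_REGIONS output. */
            struct drm_i915_gem_memory_class_instance regions[] = {
                    { .memory_class = I915_MEMORY_CLASS_DEVICE, .memory_instance = 0 },
                    { .memory_class = I915_MEMORY_CLASS_SYSTEM, .memory_instance = 0 },
            };
            struct drm_i915_gem_create_ext_memory_regions ext = {
                    .base.name = I915_GEM_CREATE_EXT_MEMORY_REGIONS,
                    .num_regions = 2,
                    .regions = (uintptr_t)regions,
            };
            struct drm_i915_gem_create_ext create = {
                    .size = size,
                    .extensions = (uintptr_t)&ext, /* chain of one extension */
            };
            int ret = ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create);

            if (ret == 0)
                    *handle = create.handle;
            return ret;
    }

With no extensions and flags=0 this degenerates to plain gem_create, exactly as the removed text says.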

View File

@@ -191,19 +191,9 @@ nf_flowtable_tcp_timeout - INTEGER (seconds)
 	TCP connections may be offloaded from nf conntrack to nf flow table.
 	Once aged, the connection is returned to nf conntrack with tcp pickup timeout.
 
-nf_flowtable_tcp_pickup - INTEGER (seconds)
-	default 120
-
-	TCP connection timeout after being aged from nf flow table offload.
-
 nf_flowtable_udp_timeout - INTEGER (seconds)
 	default 30
 
 	Control offload timeout for udp connections.
 	UDP connections may be offloaded from nf conntrack to nf flow table.
 	Once aged, the connection is returned to nf conntrack with udp pickup timeout.
-
-nf_flowtable_udp_pickup - INTEGER (seconds)
-	default 30
-
-	UDP connection timeout after being aged from nf flow table offload.

View File

@@ -263,7 +263,7 @@ Userspace can also add file descriptors to the notifying process via
 ``ioctl(SECCOMP_IOCTL_NOTIF_ADDFD)``. The ``id`` member of
 ``struct seccomp_notif_addfd`` should be the same ``id`` as in
 ``struct seccomp_notif``. The ``newfd_flags`` flag may be used to set flags
-like O_EXEC on the file descriptor in the notifying process. If the supervisor
+like O_CLOEXEC on the file descriptor in the notifying process. If the supervisor
 wants to inject the file descriptor with a specific number, the
 ``SECCOMP_ADDFD_FLAG_SETFD`` flag can be used, and set the ``newfd`` member to
 the specific number to use. If that file descriptor is already open in the
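For reference, the corrected ``O_CLOEXEC`` flag travels in ``newfd_flags`` of ``struct seccomp_notif_addfd``. A minimal supervisor-side sketch in C (``notify_fd`` is assumed to be a listener fd obtained via ``SECCOMP_FILTER_FLAG_NEW_LISTENER``, and ``req`` a previously received ``struct seccomp_notif``):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/seccomp.h>

    /* Inject src_fd into the notifying task as fd 100, close-on-exec. */
    static int inject_fd(int notify_fd, struct seccomp_notif *req, int src_fd)
    {
            struct seccomp_notif_addfd addfd = {
                    .id = req->id,            /* must match the pending notification */
                    .flags = SECCOMP_ADDFD_FLAG_SETFD,
                    .srcfd = src_fd,
                    .newfd = 100,             /* the specific number to use */
                    .newfd_flags = O_CLOEXEC, /* set on the fd in the target */
            };

            /* Returns the installed fd number in the target, or -1 on error. */
            return ioctl(notify_fd, SECCOMP_IOCTL_NOTIF_ADDFD, &addfd);
    }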

View File

@@ -11347,7 +11347,7 @@ L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/phy/mxl-gpy.c
 
-MCAB MICROCHIP CAN BUS ANALYZER TOOL DRIVER
+MCBA MICROCHIP CAN BUS ANALYZER TOOL DRIVER
 R:	Yasushi SHOJI <yashi@spacecubics.com>
 L:	linux-can@vger.kernel.org
 S:	Maintained
@@ -15823,7 +15823,7 @@ F:	Documentation/devicetree/bindings/i2c/renesas,iic-emev2.yaml
 F:	drivers/i2c/busses/i2c-emev2.c
 
 RENESAS ETHERNET DRIVERS
-R:	Sergei Shtylyov <sergei.shtylyov@gmail.com>
+R:	Sergey Shtylyov <s.shtylyov@omp.ru>
 L:	netdev@vger.kernel.org
 L:	linux-renesas-soc@vger.kernel.org
 F:	Documentation/devicetree/bindings/net/renesas,*.yaml
@@ -17835,7 +17835,7 @@ F:	include/linux/sync_file.h
 F:	include/uapi/linux/sync_file.h
 
 SYNOPSYS ARC ARCHITECTURE
-M:	Vineet Gupta <vgupta@synopsys.com>
+M:	Vineet Gupta <vgupta@kernel.org>
 L:	linux-snps-arc@lists.infradead.org
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc.git
@@ -20037,7 +20037,8 @@ F:	Documentation/devicetree/bindings/extcon/wlf,arizona.yaml
 F:	Documentation/devicetree/bindings/mfd/wlf,arizona.yaml
 F:	Documentation/devicetree/bindings/mfd/wm831x.txt
 F:	Documentation/devicetree/bindings/regulator/wlf,arizona.yaml
-F:	Documentation/devicetree/bindings/sound/wlf,arizona.yaml
+F:	Documentation/devicetree/bindings/sound/wlf,*.yaml
+F:	Documentation/devicetree/bindings/sound/wm*
 F:	Documentation/hwmon/wm83??.rst
 F:	arch/arm/mach-s3c/mach-crag6410*
 F:	drivers/clk/clk-wm83*.c

View File

@@ -2,7 +2,7 @@
 VERSION = 5
 PATCHLEVEL = 14
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc5
 NAME = Opossums on Parade
 
 # *DOCUMENTATION*
@@ -1316,6 +1316,16 @@ PHONY += scripts_unifdef
 scripts_unifdef: scripts_basic
 	$(Q)$(MAKE) $(build)=scripts scripts/unifdef
 
+# ---------------------------------------------------------------------------
+# Install
+
+# Many distributions have the custom install script, /sbin/installkernel.
+# If DKMS is installed, 'make install' will eventually recurse back
+# to this Makefile to build and install external modules.
+# Cancel sub_make_done so that options such as M=, V=, etc. are parsed.
+
+install: sub_make_done :=
+
 # ---------------------------------------------------------------------------
 # Tools

View File

@@ -409,7 +409,7 @@ choice
 	help
 	  Depending on the configuration, CPU can contain DSP registers
 	  (ACC0_GLO, ACC0_GHI, DSP_BFLY0, DSP_CTRL, DSP_FFT_CTRL).
-	  Bellow is options describing how to handle these registers in
+	  Below are options describing how to handle these registers in
 	  interrupt entry / exit and in context switch.
 
 config ARC_DSP_NONE

View File

@@ -24,7 +24,7 @@
  */
 static inline __sum16 csum_fold(__wsum s)
 {
-	unsigned r = s << 16 | s >> 16;	/* ror */
+	unsigned int r = s << 16 | s >> 16;	/* ror */
 	s = ~s;
 	s -= r;
 	return s >> 16;
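The rotate-and-subtract in this helper is equivalent to the textbook fold (add the two 16-bit halves with end-around carry, then complement). A small host-side check of that equivalence, written as a standalone sketch rather than kernel code:

    #include <stdint.h>
    #include <assert.h>

    /* Textbook fold: add halves with end-around carry, then invert. */
    static uint16_t fold_generic(uint32_t s)
    {
            s = (s & 0xffff) + (s >> 16);
            s = (s & 0xffff) + (s >> 16);
            return (uint16_t)~s;
    }

    /* The ARC variant: ~s - ror(s, 16); the result lands in the high half. */
    static uint16_t fold_ror(uint32_t s)
    {
            uint32_t r = s << 16 | s >> 16;

            s = ~s;
            s -= r;
            return (uint16_t)(s >> 16);
    }

    int main(void)
    {
            for (uint64_t s = 0; s <= 0xffffffffull; s += 0x10001)
                    assert(fold_generic((uint32_t)s) == fold_ror((uint32_t)s));
            return 0;
    }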

View File

@@ -123,7 +123,7 @@ static const char * const arc_pmu_ev_hw_map[] = {
 #define C(_x)			PERF_COUNT_HW_CACHE_##_x
 #define CACHE_OP_UNSUPPORTED	0xffff
 
-static const unsigned arc_pmu_cache_map[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
+static const unsigned int arc_pmu_cache_map[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
 	[C(L1D)] = {
 		[C(OP_READ)] = {
 			[C(RESULT_ACCESS)] = PERF_COUNT_ARC_LDC,

View File

@@ -57,23 +57,26 @@ void fpu_save_restore(struct task_struct *prev, struct task_struct *next)
 void fpu_init_task(struct pt_regs *regs)
 {
+	const unsigned int fwe = 0x80000000;
+
 	/* default rounding mode */
 	write_aux_reg(ARC_REG_FPU_CTRL, 0x100);
 
-	/* set "Write enable" to allow explicit write to exception flags */
-	write_aux_reg(ARC_REG_FPU_STATUS, 0x80000000);
+	/* Initialize to zero: setting requires FWE be set */
+	write_aux_reg(ARC_REG_FPU_STATUS, fwe);
 }
 
 void fpu_save_restore(struct task_struct *prev, struct task_struct *next)
 {
 	struct arc_fpu *save = &prev->thread.fpu;
 	struct arc_fpu *restore = &next->thread.fpu;
+	const unsigned int fwe = 0x80000000;
 
 	save->ctrl = read_aux_reg(ARC_REG_FPU_CTRL);
 	save->status = read_aux_reg(ARC_REG_FPU_STATUS);
 
 	write_aux_reg(ARC_REG_FPU_CTRL, restore->ctrl);
-	write_aux_reg(ARC_REG_FPU_STATUS, restore->status);
+	write_aux_reg(ARC_REG_FPU_STATUS, (fwe | restore->status));
 }
 
 #endif

View File

@@ -260,7 +260,7 @@ static void init_unwind_hdr(struct unwind_table *table,
 {
 	const u8 *ptr;
 	unsigned long tableSize = table->size, hdrSize;
-	unsigned n;
+	unsigned int n;
 	const u32 *fde;
 	struct {
 		u8 version;
@@ -462,7 +462,7 @@ static uleb128_t get_uleb128(const u8 **pcur, const u8 *end)
 {
 	const u8 *cur = *pcur;
 	uleb128_t value;
-	unsigned shift;
+	unsigned int shift;
 
 	for (shift = 0, value = 0; cur < end; shift += 7) {
 		if (shift + 7 > 8 * sizeof(value)
@@ -483,7 +483,7 @@ static sleb128_t get_sleb128(const u8 **pcur, const u8 *end)
 {
 	const u8 *cur = *pcur;
 	sleb128_t value;
-	unsigned shift;
+	unsigned int shift;
 
 	for (shift = 0, value = 0; cur < end; shift += 7) {
 		if (shift + 7 > 8 * sizeof(value)
@@ -609,7 +609,7 @@ static unsigned long read_pointer(const u8 **pLoc, const void *end,
 static signed fde_pointer_type(const u32 *cie)
 {
 	const u8 *ptr = (const u8 *)(cie + 2);
-	unsigned version = *ptr;
+	unsigned int version = *ptr;
 
 	if (*++ptr) {
 		const char *aug;
@@ -904,7 +904,7 @@ int arc_unwind(struct unwind_frame_info *frame)
 	const u8 *ptr = NULL, *end = NULL;
 	unsigned long pc = UNW_PC(frame) - frame->call_frame;
 	unsigned long startLoc = 0, endLoc = 0, cfa;
-	unsigned i;
+	unsigned int i;
 	signed ptrType = -1;
 	uleb128_t retAddrReg = 0;
 	const struct unwind_table *table;

View File

@@ -88,6 +88,8 @@ SECTIONS
 		CPUIDLE_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
+		IRQENTRY_TEXT
+		SOFTIRQENTRY_TEXT
 		*(.fixup)
 		*(.gnu.warning)
 	}

View File

@@ -1595,7 +1595,7 @@ dcan1: can@0 {
 	compatible = "ti,am4372-d_can", "ti,am3352-d_can";
 	reg = <0x0 0x2000>;
 	clocks = <&dcan1_fck>;
-	clock-name = "fck";
+	clock-names = "fck";
 	syscon-raminit = <&scm_conf 0x644 1>;
 	interrupts = <GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>;
 	status = "disabled";

View File

@@ -582,7 +582,7 @@ &i2c0 {
 	status = "okay";
 	pinctrl-names = "default";
 	pinctrl-0 = <&i2c0_pins>;
-	clock-frequency = <400000>;
+	clock-frequency = <100000>;
 
 	tps65218: tps65218@24 {
 		reg = <0x24>;

View File

@@ -388,13 +388,13 @@ MX53_PAD_LVDS0_TX3_P__LDB_LVDS0_TX3 0x80000000
 pinctrl_power_button: powerbutgrp {
 	fsl,pins = <
-		MX53_PAD_SD2_DATA2__GPIO1_13 0x1e4
+		MX53_PAD_SD2_DATA0__GPIO1_15 0x1e4
 	>;
 };
 
 pinctrl_power_out: poweroutgrp {
 	fsl,pins = <
-		MX53_PAD_SD2_DATA0__GPIO1_15 0x1e4
+		MX53_PAD_SD2_DATA2__GPIO1_13 0x1e4
 	>;
 };

View File

@@ -54,7 +54,13 @@ &fec {
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_microsom_enet_ar8035>;
 	phy-mode = "rgmii-id";
-	phy-reset-duration = <2>;
+	/*
+	 * The PHY seems to require a long-enough reset duration to avoid
+	 * some rare issues where the PHY gets stuck in an inconsistent and
+	 * non-functional state at boot-up. 10ms proved to be fine.
+	 */
+	phy-reset-duration = <10>;
 	phy-reset-gpios = <&gpio4 15 GPIO_ACTIVE_LOW>;
 	status = "okay";

View File

@@ -43,6 +43,7 @@ &usdhc2 {
 	assigned-clock-rates = <0>, <198000000>;
 	cap-power-off-card;
 	keep-power-in-suspend;
+	max-frequency = <25000000>;
 	mmc-pwrseq = <&wifi_pwrseq>;
 	no-1-8-v;
 	non-removable;

View File

@@ -30,14 +30,6 @@ vsys_cobra: fixedregulator-vsys_cobra {
 	regulator-max-microvolt = <5000000>;
 };
 
-vdds_1v8_main: fixedregulator-vdds_1v8_main {
-	compatible = "regulator-fixed";
-	regulator-name = "vdds_1v8_main";
-	vin-supply = <&smps7_reg>;
-	regulator-min-microvolt = <1800000>;
-	regulator-max-microvolt = <1800000>;
-};
-
 vmmcsd_fixed: fixedregulator-mmcsd {
 	compatible = "regulator-fixed";
 	regulator-name = "vmmcsd_fixed";
@@ -487,6 +479,7 @@ smps6_reg: smps6 {
 	regulator-boot-on;
 };
 
+vdds_1v8_main:
 smps7_reg: smps7 {
 	/* VDDS_1v8_OMAP over VDDS_1v8_MAIN */
 	regulator-name = "smps7";

View File

@@ -755,14 +755,14 @@ clcd@10120000 {
 	status = "disabled";
 };
 
-vica: intc@10140000 {
+vica: interrupt-controller@10140000 {
 	compatible = "arm,versatile-vic";
 	interrupt-controller;
 	#interrupt-cells = <1>;
 	reg = <0x10140000 0x20>;
 };
 
-vicb: intc@10140020 {
+vicb: interrupt-controller@10140020 {
 	compatible = "arm,versatile-vic";
 	interrupt-controller;
 	#interrupt-cells = <1>;

View File

@@ -37,7 +37,7 @@ gpio-keys-polled {
 	poll-interval = <20>;
 
 	/*
-	 * The EXTi IRQ line 3 is shared with touchscreen and ethernet,
+	 * The EXTi IRQ line 3 is shared with ethernet,
 	 * so mark this as polled GPIO key.
 	 */
 	button-0 {
@@ -46,6 +46,16 @@ button-0 {
 		gpios = <&gpiof 3 GPIO_ACTIVE_LOW>;
 	};
 
+	/*
+	 * The EXTi IRQ line 6 is shared with touchscreen,
+	 * so mark this as polled GPIO key.
+	 */
+	button-1 {
+		label = "TA2-GPIO-B";
+		linux,code = <KEY_B>;
+		gpios = <&gpiod 6 GPIO_ACTIVE_LOW>;
+	};
+
 	/*
 	 * The EXTi IRQ line 0 is shared with PMIC,
 	 * so mark this as polled GPIO key.
@@ -60,13 +70,6 @@ button-2 {
 gpio-keys {
 	compatible = "gpio-keys";
 
-	button-1 {
-		label = "TA2-GPIO-B";
-		linux,code = <KEY_B>;
-		gpios = <&gpiod 6 GPIO_ACTIVE_LOW>;
-		wakeup-source;
-	};
-
 	button-3 {
 		label = "TA4-GPIO-D";
 		linux,code = <KEY_D>;
@@ -82,6 +85,7 @@ led-0 {
 		label = "green:led5";
 		gpios = <&gpioc 6 GPIO_ACTIVE_HIGH>;
 		default-state = "off";
+		status = "disabled";
 	};
 
 	led-1 {
@@ -185,8 +189,8 @@ sgtl5000_rx_endpoint: endpoint@1 {
 	touchscreen@38 {
 		compatible = "edt,edt-ft5406";
 		reg = <0x38>;
-		interrupt-parent = <&gpiog>;
-		interrupts = <2 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */
+		interrupt-parent = <&gpioc>;
+		interrupts = <6 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */
 	};
 };

View File

@@ -12,6 +12,8 @@ / {
 	aliases {
 		ethernet0 = &ethernet0;
 		ethernet1 = &ksz8851;
+		rtc0 = &hwrtc;
+		rtc1 = &rtc;
 	};
 
 	memory@c0000000 {
@@ -138,6 +140,7 @@ phy0: ethernet-phy@1 {
 	reset-gpios = <&gpioh 3 GPIO_ACTIVE_LOW>;
 	reset-assert-us = <500>;
 	reset-deassert-us = <500>;
+	smsc,disable-energy-detect;
 	interrupt-parent = <&gpioi>;
 	interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
 };
@@ -248,7 +251,7 @@ &i2c4 {
 	/delete-property/dmas;
 	/delete-property/dma-names;
 
-	rtc@32 {
+	hwrtc: rtc@32 {
 		compatible = "microcrystal,rv8803";
 		reg = <0x32>;
 	};

View File

@@ -68,7 +68,6 @@ void imx_set_cpu_arg(int cpu, u32 arg);
 void v7_secondary_startup(void);
 void imx_scu_map_io(void);
 void imx_smp_prepare(void);
-void imx_gpcv2_set_core1_pdn_pup_by_software(bool pdn);
 #else
 static inline void imx_scu_map_io(void) {}
 static inline void imx_smp_prepare(void) {}
@@ -81,6 +80,7 @@ void imx_gpc_mask_all(void);
 void imx_gpc_restore_all(void);
 void imx_gpc_hwirq_mask(unsigned int hwirq);
 void imx_gpc_hwirq_unmask(unsigned int hwirq);
+void imx_gpcv2_set_core1_pdn_pup_by_software(bool pdn);
 void imx_anatop_init(void);
 void imx_anatop_pre_suspend(void);
 void imx_anatop_post_resume(void);

View File

@@ -103,6 +103,7 @@ struct mmdc_pmu {
 	struct perf_event *mmdc_events[MMDC_NUM_COUNTERS];
 	struct hlist_node node;
 	struct fsl_mmdc_devtype_data *devtype_data;
+	struct clk *mmdc_ipg_clk;
 };
 
 /*
@@ -462,11 +463,14 @@ static int imx_mmdc_remove(struct platform_device *pdev)
 	cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
 	perf_pmu_unregister(&pmu_mmdc->pmu);
+	iounmap(pmu_mmdc->mmdc_base);
+	clk_disable_unprepare(pmu_mmdc->mmdc_ipg_clk);
 	kfree(pmu_mmdc);
 	return 0;
 }
 
-static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_base)
+static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_base,
+			      struct clk *mmdc_ipg_clk)
 {
 	struct mmdc_pmu *pmu_mmdc;
 	char *name;
@@ -494,6 +498,7 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
 	}
 
 	mmdc_num = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev);
+	pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
 	if (mmdc_num == 0)
 		name = "mmdc";
 	else
@@ -529,7 +534,7 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
 #else
 #define imx_mmdc_remove NULL
-#define imx_mmdc_perf_init(pdev, mmdc_base) 0
+#define imx_mmdc_perf_init(pdev, mmdc_base, mmdc_ipg_clk) 0
 #endif
 
 static int imx_mmdc_probe(struct platform_device *pdev)
@@ -567,7 +572,13 @@ static int imx_mmdc_probe(struct platform_device *pdev)
 	val &= ~(1 << BP_MMDC_MAPSR_PSD);
 	writel_relaxed(val, reg);
 
-	return imx_mmdc_perf_init(pdev, mmdc_base);
+	err = imx_mmdc_perf_init(pdev, mmdc_base, mmdc_ipg_clk);
+	if (err) {
+		iounmap(mmdc_base);
+		clk_disable_unprepare(mmdc_ipg_clk);
+	}
+
+	return err;
 }
 
 int imx_mmdc_get_ddr_type(void)

View File

@@ -91,6 +91,7 @@ config MACH_IXDP465
 
 config MACH_GORAMO_MLR
 	bool "GORAMO Multi Link Router"
+	depends on IXP4XX_PCI_LEGACY
 	help
 	  Say 'Y' here if you want your kernel to support GORAMO
 	  MultiLink router.

View File

@@ -3776,6 +3776,7 @@ struct powerdomain *omap_hwmod_get_pwrdm(struct omap_hwmod *oh)
 	struct omap_hwmod_ocp_if *oi;
 	struct clockdomain *clkdm;
 	struct clk_hw_omap *clk;
+	struct clk_hw *hw;
 
 	if (!oh)
 		return NULL;
@@ -3792,7 +3793,14 @@ struct powerdomain *omap_hwmod_get_pwrdm(struct omap_hwmod *oh)
 		c = oi->_clk;
 	}
 
-	clk = to_clk_hw_omap(__clk_get_hw(c));
+	hw = __clk_get_hw(c);
+	if (!hw)
+		return NULL;
+
+	clk = to_clk_hw_omap(hw);
+	if (!clk)
+		return NULL;
+
 	clkdm = clk->clkdm;
 	if (!clkdm)
 		return NULL;

View File

@@ -1800,11 +1800,11 @@ config RANDOMIZE_BASE
 	  If unsure, say N.
 
 config RANDOMIZE_MODULE_REGION_FULL
-	bool "Randomize the module region over a 4 GB range"
+	bool "Randomize the module region over a 2 GB range"
 	depends on RANDOMIZE_BASE
 	default y
 	help
-	  Randomizes the location of the module region inside a 4 GB window
+	  Randomizes the location of the module region inside a 2 GB window
 	  covering the core kernel. This way, it is less likely for modules
 	  to leak information about the location of core kernel data structures
 	  but it does imply that function calls between modules and the core
@@ -1812,7 +1812,10 @@ config RANDOMIZE_MODULE_REGION_FULL
 	  When this option is not set, the module region will be randomized over
 	  a limited range that contains the [_stext, _etext] interval of the
-	  core kernel, so branch relocations are always in range.
+	  core kernel, so branch relocations are almost always in range unless
+	  ARM64_MODULE_PLTS is enabled and the region is exhausted. In this
+	  particular case of region exhaustion, modules might be able to fall
+	  back to a larger 2GB area.
 
 config CC_HAVE_STACKPROTECTOR_SYSREG
 	def_bool $(cc-option,-mstack-protector-guard=sysreg -mstack-protector-guard-reg=sp_el0 -mstack-protector-guard-offset=0)

View File

@@ -21,19 +21,11 @@ LDFLAGS_vmlinux += -shared -Bsymbolic -z notext \
 endif
 
 ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
-ifneq ($(CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419),y)
-$(warning ld does not support --fix-cortex-a53-843419; kernel may be susceptible to erratum)
-else
+ifeq ($(CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419),y)
 LDFLAGS_vmlinux += --fix-cortex-a53-843419
 endif
 endif
 
-ifeq ($(CONFIG_ARM64_USE_LSE_ATOMICS), y)
-ifneq ($(CONFIG_ARM64_LSE_ATOMICS), y)
-$(warning LSE atomics not supported by binutils)
-endif
-endif
-
 cc_has_k_constraint := $(call try-run,echo \
 	'int main(void) { \
 		asm volatile("and w0, w0, %w0" :: "K" (4294967295)); \
@@ -176,6 +168,17 @@ vdso_install:
 archprepare:
 	$(Q)$(MAKE) $(build)=arch/arm64/tools kapi
+ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
+ifneq ($(CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419),y)
+	@echo "warning: ld does not support --fix-cortex-a53-843419; kernel may be susceptible to erratum" >&2
+endif
+endif
+ifeq ($(CONFIG_ARM64_USE_LSE_ATOMICS),y)
+ifneq ($(CONFIG_ARM64_LSE_ATOMICS),y)
+	@echo "warning: LSE atomics not supported by binutils" >&2
+endif
+endif
 
 # We use MRPROPER_FILES and CLEAN_FILES now
 archclean:

View File

@@ -54,6 +54,7 @@ &mscc_felix {
 &mscc_felix_port0 {
 	label = "swp0";
+	managed = "in-band-status";
 	phy-handle = <&phy0>;
 	phy-mode = "sgmii";
 	status = "okay";
@@ -61,6 +62,7 @@ &mscc_felix_port0 {
 &mscc_felix_port1 {
 	label = "swp1";
+	managed = "in-band-status";
 	phy-handle = <&phy1>;
 	phy-mode = "sgmii";
 	status = "okay";

View File

@@ -66,7 +66,7 @@ CPU_PW20: cpu-pw20 {
 	};
 };
 
-sysclk: clock-sysclk {
+sysclk: sysclk {
 	compatible = "fixed-clock";
 	#clock-cells = <0>;
 	clock-frequency = <100000000>;

View File

@@ -19,6 +19,8 @@ / {
 	aliases {
 		spi0 = &spi0;
 		ethernet1 = &eth1;
+		mmc0 = &sdhci0;
+		mmc1 = &sdhci1;
 	};
 
 	chosen {
@@ -119,6 +121,7 @@ &i2c0 {
 	pinctrl-names = "default";
 	pinctrl-0 = <&i2c1_pins>;
 	clock-frequency = <100000>;
+	/delete-property/ mrvl,i2c-fast-mode;
 	status = "okay";
 
 	rtc@6f {

View File

@@ -1840,7 +1840,11 @@ pcie@14100000 {
 	interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE1R &emc>,
 			<&mc TEGRA194_MEMORY_CLIENT_PCIE1W &emc>;
-	interconnect-names = "read", "write";
+	interconnect-names = "dma-mem", "write";
+	iommus = <&smmu TEGRA194_SID_PCIE1>;
+	iommu-map = <0x0 &smmu TEGRA194_SID_PCIE1 0x1000>;
+	iommu-map-mask = <0x0>;
+	dma-coherent;
 };
 
 pcie@14120000 {
@@ -1890,7 +1894,11 @@ pcie@14120000 {
 	interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE2AR &emc>,
 			<&mc TEGRA194_MEMORY_CLIENT_PCIE2AW &emc>;
-	interconnect-names = "read", "write";
+	interconnect-names = "dma-mem", "write";
+	iommus = <&smmu TEGRA194_SID_PCIE2>;
+	iommu-map = <0x0 &smmu TEGRA194_SID_PCIE2 0x1000>;
+	iommu-map-mask = <0x0>;
+	dma-coherent;
 };
 
 pcie@14140000 {
@@ -1940,7 +1948,11 @@ pcie@14140000 {
 	interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE3R &emc>,
 			<&mc TEGRA194_MEMORY_CLIENT_PCIE3W &emc>;
-	interconnect-names = "read", "write";
+	interconnect-names = "dma-mem", "write";
+	iommus = <&smmu TEGRA194_SID_PCIE3>;
+	iommu-map = <0x0 &smmu TEGRA194_SID_PCIE3 0x1000>;
+	iommu-map-mask = <0x0>;
+	dma-coherent;
 };
 
 pcie@14160000 {
@@ -1990,7 +2002,11 @@ pcie@14160000 {
 	interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE4R &emc>,
 			<&mc TEGRA194_MEMORY_CLIENT_PCIE4W &emc>;
-	interconnect-names = "read", "write";
+	interconnect-names = "dma-mem", "write";
+	iommus = <&smmu TEGRA194_SID_PCIE4>;
+	iommu-map = <0x0 &smmu TEGRA194_SID_PCIE4 0x1000>;
+	iommu-map-mask = <0x0>;
+	dma-coherent;
 };
 
 pcie@14180000 {
@@ -2040,7 +2056,11 @@ pcie@14180000 {
 	interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE0R &emc>,
 			<&mc TEGRA194_MEMORY_CLIENT_PCIE0W &emc>;
-	interconnect-names = "read", "write";
+	interconnect-names = "dma-mem", "write";
+	iommus = <&smmu TEGRA194_SID_PCIE0>;
+	iommu-map = <0x0 &smmu TEGRA194_SID_PCIE0 0x1000>;
+	iommu-map-mask = <0x0>;
+	dma-coherent;
 };
 
 pcie@141a0000 {
@@ -2094,7 +2114,11 @@ pcie@141a0000 {
 	interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE5R &emc>,
 			<&mc TEGRA194_MEMORY_CLIENT_PCIE5W &emc>;
-	interconnect-names = "read", "write";
+	interconnect-names = "dma-mem", "write";
+	iommus = <&smmu TEGRA194_SID_PCIE5>;
+	iommu-map = <0x0 &smmu TEGRA194_SID_PCIE5 0x1000>;
+	iommu-map-mask = <0x0>;
+	dma-coherent;
 };
 
 pcie_ep@14160000 {
@@ -2127,6 +2151,14 @@ pcie_ep@14160000 {
 	nvidia,aspm-cmrt-us = <60>;
 	nvidia,aspm-pwr-on-t-us = <20>;
 	nvidia,aspm-l0s-entrance-latency-us = <3>;
+	interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE4R &emc>,
+			<&mc TEGRA194_MEMORY_CLIENT_PCIE4W &emc>;
+	interconnect-names = "dma-mem", "write";
+	iommus = <&smmu TEGRA194_SID_PCIE4>;
+	iommu-map = <0x0 &smmu TEGRA194_SID_PCIE4 0x1000>;
+	iommu-map-mask = <0x0>;
+	dma-coherent;
 };
 
 pcie_ep@14180000 {
@@ -2159,6 +2191,14 @@ pcie_ep@14180000 {
 	nvidia,aspm-cmrt-us = <60>;
 	nvidia,aspm-pwr-on-t-us = <20>;
 	nvidia,aspm-l0s-entrance-latency-us = <3>;
+	interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE0R &emc>,
+			<&mc TEGRA194_MEMORY_CLIENT_PCIE0W &emc>;
+	interconnect-names = "dma-mem", "write";
+	iommus = <&smmu TEGRA194_SID_PCIE0>;
+	iommu-map = <0x0 &smmu TEGRA194_SID_PCIE0 0x1000>;
+	iommu-map-mask = <0x0>;
+	dma-coherent;
 };
 
 pcie_ep@141a0000 {
@@ -2194,6 +2234,14 @@ pcie_ep@141a0000 {
 	nvidia,aspm-cmrt-us = <60>;
 	nvidia,aspm-pwr-on-t-us = <20>;
 	nvidia,aspm-l0s-entrance-latency-us = <3>;
+	interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE5R &emc>,
+			<&mc TEGRA194_MEMORY_CLIENT_PCIE5W &emc>;
+	interconnect-names = "dma-mem", "write";
+	iommus = <&smmu TEGRA194_SID_PCIE5>;
+	iommu-map = <0x0 &smmu TEGRA194_SID_PCIE5 0x1000>;
+	iommu-map-mask = <0x0>;
+	dma-coherent;
 };
 
 sram@40000000 {

View File

@@ -320,7 +320,17 @@ static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
 static inline unsigned long regs_return_value(struct pt_regs *regs)
 {
-	return regs->regs[0];
+	unsigned long val = regs->regs[0];
+
+	/*
+	 * Audit currently uses regs_return_value() instead of
+	 * syscall_get_return_value(). Apply the same sign-extension here until
+	 * audit is updated to use syscall_get_return_value().
+	 */
+	if (compat_user_mode(regs))
+		val = sign_extend64(val, 31);
+
+	return val;
 }
 
 static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
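To see why the sign extension matters: a compat (AArch32) task returning a negative errno only populates the low 32 bits of x0, so the raw 64-bit value is a large positive number and is not recognized as an error. A tiny standalone illustration of the arithmetic (sign_extend64() reimplemented locally for the example):

    #include <stdint.h>
    #include <stdio.h>

    /* Local stand-in for the kernel's sign_extend64(value, index). */
    static int64_t sign_extend64(uint64_t value, int index)
    {
            int shift = 63 - index;

            return (int64_t)(value << shift) >> shift;
    }

    int main(void)
    {
            uint64_t x0 = 0xffffffeaULL; /* -EINVAL as seen from a 32-bit task */

            printf("raw:      %lld\n", (long long)x0);                    /* 4294967274 */
            printf("extended: %lld\n", (long long)sign_extend64(x0, 31)); /* -22 */
            return 0;
    }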

View File

@@ -35,7 +35,7 @@ struct stack_info {
  * accounting information necessary for robust unwinding.
  *
  * @fp:          The fp value in the frame record (or the real fp)
- * @pc:          The fp value in the frame record (or the real lr)
+ * @pc:          The lr value in the frame record (or the real lr)
  *
  * @stacks_done: Stacks which have been entirely unwound, for which it is no
  *               longer valid to unwind to.

View File

@@ -29,24 +29,25 @@ static inline void syscall_rollback(struct task_struct *task,
 	regs->regs[0] = regs->orig_x0;
 }
 
+static inline long syscall_get_return_value(struct task_struct *task,
+					    struct pt_regs *regs)
+{
+	unsigned long val = regs->regs[0];
+
+	if (is_compat_thread(task_thread_info(task)))
+		val = sign_extend64(val, 31);
+
+	return val;
+}
+
 static inline long syscall_get_error(struct task_struct *task,
 				     struct pt_regs *regs)
 {
-	unsigned long error = regs->regs[0];
-
-	if (is_compat_thread(task_thread_info(task)))
-		error = sign_extend64(error, 31);
+	unsigned long error = syscall_get_return_value(task, regs);
 
 	return IS_ERR_VALUE(error) ? error : 0;
 }
 
-static inline long syscall_get_return_value(struct task_struct *task,
-					    struct pt_regs *regs)
-{
-	return regs->regs[0];
-}
-
 static inline void syscall_set_return_value(struct task_struct *task,
 					    struct pt_regs *regs,
 					    int error, long val)
View File

@@ -162,7 +162,9 @@ u64 __init kaslr_early_init(void)
 		 * a PAGE_SIZE multiple in the range [_etext - MODULES_VSIZE,
 		 * _stext) . This guarantees that the resulting region still
 		 * covers [_stext, _etext], and that all relative branches can
-		 * be resolved without veneers.
+		 * be resolved without veneers unless this region is exhausted
+		 * and we fall back to a larger 2GB window in module_alloc()
+		 * when ARM64_MODULE_PLTS is enabled.
 		 */
 		module_range = MODULES_VSIZE - (u64)(_etext - _stext);
 		module_alloc_base = (u64)_etext + offset - MODULES_VSIZE;

View File

@@ -1862,7 +1862,7 @@ void syscall_trace_exit(struct pt_regs *regs)
 	audit_syscall_exit(regs);
 
 	if (flags & _TIF_SYSCALL_TRACEPOINT)
-		trace_sys_exit(regs, regs_return_value(regs));
+		trace_sys_exit(regs, syscall_get_return_value(current, regs));
 
 	if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
 		tracehook_report_syscall(regs, PTRACE_SYSCALL_EXIT);

View File

@@ -29,6 +29,7 @@
 #include <asm/unistd.h>
 #include <asm/fpsimd.h>
 #include <asm/ptrace.h>
+#include <asm/syscall.h>
 #include <asm/signal32.h>
 #include <asm/traps.h>
 #include <asm/vdso.h>
@@ -890,7 +891,7 @@ static void do_signal(struct pt_regs *regs)
 		     retval == -ERESTART_RESTARTBLOCK ||
 		     (retval == -ERESTARTSYS &&
 		      !(ksig.ka.sa.sa_flags & SA_RESTART)))) {
-			regs->regs[0] = -EINTR;
+			syscall_set_return_value(current, regs, -EINTR, 0);
 			regs->pc = continue_addr;
 		}

View File

@@ -218,7 +218,7 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
 
 #ifdef CONFIG_STACKTRACE
 
-noinline void arch_stack_walk(stack_trace_consume_fn consume_entry,
+noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
 			      void *cookie, struct task_struct *task,
 			      struct pt_regs *regs)
 {

View File

@@ -54,10 +54,7 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
 		ret = do_ni_syscall(regs, scno);
 	}
 
-	if (is_compat_task())
-		ret = lower_32_bits(ret);
-
-	regs->regs[0] = ret;
+	syscall_set_return_value(current, regs, 0, ret);
 
 	/*
 	 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
@@ -115,7 +112,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 		 * syscall. do_notify_resume() will send a signal to userspace
 		 * before the syscall is restarted.
 		 */
-		regs->regs[0] = -ERESTARTNOINTR;
+		syscall_set_return_value(current, regs, -ERESTARTNOINTR, 0);
 		return;
 	}
@@ -136,7 +133,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 	 * anyway.
 	 */
 	if (scno == NO_SYSCALL)
-		regs->regs[0] = -ENOSYS;
+		syscall_set_return_value(current, regs, -ENOSYS, 0);
 	scno = syscall_trace_enter(regs);
 	if (scno == NO_SYSCALL)
 		goto trace_exit;

View File

@@ -321,7 +321,7 @@ KBUILD_LDFLAGS += -m $(ld-emul)
 
 ifdef CONFIG_MIPS
 CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
-	egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \
+	egrep -vw '__GNUC_(MINOR_|PATCHLEVEL_)?_' | \
 	sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/" -e 's/\$$/&&/g')
 endif

View File

@@ -58,15 +58,20 @@ do { \
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 {
-	pmd_t *pmd = NULL;
+	pmd_t *pmd;
 	struct page *pg;
 
-	pg = alloc_pages(GFP_KERNEL | __GFP_ACCOUNT, PMD_ORDER);
-	if (pg) {
-		pgtable_pmd_page_ctor(pg);
-		pmd = (pmd_t *)page_address(pg);
-		pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table);
+	pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_ORDER);
+	if (!pg)
+		return NULL;
+
+	if (!pgtable_pmd_page_ctor(pg)) {
+		__free_pages(pg, PMD_ORDER);
+		return NULL;
 	}
+
+	pmd = (pmd_t *)page_address(pg);
+	pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table);
 	return pmd;
 }

View File

@@ -48,7 +48,8 @@ static struct plat_serial8250_port uart8250_data[] = {
 		.mapbase  = 0x1f000900,	/* The CBUS UART */
 		.irq      = MIPS_CPU_IRQ_BASE + MIPSCPU_INT_MB2,
 		.uartclk  = 3686400,	/* Twice the usual clk! */
-		.iotype   = UPIO_MEM32,
+		.iotype   = IS_ENABLED(CONFIG_CPU_BIG_ENDIAN) ?
+			    UPIO_MEM32BE : UPIO_MEM32,
 		.flags    = CBUS_UART_FLAGS,
 		.regshift = 3,
 	},

View File

@@ -492,10 +492,16 @@ config CC_HAVE_STACKPROTECTOR_TLS
 
 config STACKPROTECTOR_PER_TASK
 	def_bool y
+	depends on !GCC_PLUGIN_RANDSTRUCT
 	depends on STACKPROTECTOR && CC_HAVE_STACKPROTECTOR_TLS
 
+config PHYS_RAM_BASE_FIXED
+	bool "Explicitly specified physical RAM address"
+	default n
+
 config PHYS_RAM_BASE
 	hex "Platform Physical RAM address"
+	depends on PHYS_RAM_BASE_FIXED
 	default "0x80000000"
 	help
 	  This is the physical address of RAM in the system. It has to be
@@ -508,6 +514,7 @@ config XIP_KERNEL
 	# This prevents XIP from being enabled by all{yes,mod}config, which
 	# fail to build since XIP doesn't support large kernels.
 	depends on !COMPILE_TEST
+	select PHYS_RAM_BASE_FIXED
 	help
 	  Execute-In-Place allows the kernel to run from non-volatile storage
 	  directly addressable by the CPU, such as NOR flash. This saves RAM

View File

@@ -24,7 +24,7 @@ cpus {
 	memory@80000000 {
 		device_type = "memory";
-		reg = <0x0 0x80000000 0x2 0x00000000>;
+		reg = <0x0 0x80000000 0x4 0x00000000>;
 	};
 
 	soc {

View File

@@ -103,6 +103,7 @@ struct kernel_mapping {
 };
 
 extern struct kernel_mapping kernel_map;
+extern phys_addr_t phys_ram_base;
 
 #ifdef CONFIG_64BIT
 #define is_kernel_mapping(x) \
@@ -113,9 +114,9 @@ extern struct kernel_mapping kernel_map;
 #define linear_mapping_pa_to_va(x)	((void *)((unsigned long)(x) + kernel_map.va_pa_offset))
 #define kernel_mapping_pa_to_va(y)	({ \
 	unsigned long _y = y; \
-	(_y >= CONFIG_PHYS_RAM_BASE) ? \
-		(void *)((unsigned long)(_y) + kernel_map.va_kernel_pa_offset + XIP_OFFSET) : \
-		(void *)((unsigned long)(_y) + kernel_map.va_kernel_xip_pa_offset); \
+	(IS_ENABLED(CONFIG_XIP_KERNEL) && _y < phys_ram_base) ? \
+		(void *)((unsigned long)(_y) + kernel_map.va_kernel_xip_pa_offset) : \
+		(void *)((unsigned long)(_y) + kernel_map.va_kernel_pa_offset + XIP_OFFSET); \
 	})
 #define __pa_to_va_nodebug(x)		linear_mapping_pa_to_va(x)

View File

@@ -27,7 +27,7 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
 		fp = frame_pointer(regs);
 		sp = user_stack_pointer(regs);
 		pc = instruction_pointer(regs);
-	} else if (task == current) {
+	} else if (task == NULL || task == current) {
 		fp = (unsigned long)__builtin_frame_address(1);
 		sp = (unsigned long)__builtin_frame_address(0);
 		pc = (unsigned long)__builtin_return_address(0);

View File

@@ -36,6 +36,9 @@ EXPORT_SYMBOL(kernel_map);
 #define kernel_map	(*(struct kernel_mapping *)XIP_FIXUP(&kernel_map))
 #endif
 
+phys_addr_t phys_ram_base __ro_after_init;
+EXPORT_SYMBOL(phys_ram_base);
+
 #ifdef CONFIG_XIP_KERNEL
 extern char _xiprom[], _exiprom[];
 #endif
@@ -160,7 +163,7 @@ static void __init setup_bootmem(void)
 	phys_addr_t vmlinux_end = __pa_symbol(&_end);
 	phys_addr_t vmlinux_start = __pa_symbol(&_start);
 	phys_addr_t __maybe_unused max_mapped_addr;
-	phys_addr_t dram_end;
+	phys_addr_t phys_ram_end;
 
 #ifdef CONFIG_XIP_KERNEL
 	vmlinux_start = __pa_symbol(&_sdata);
@@ -181,9 +184,12 @@ static void __init setup_bootmem(void)
 #endif
 	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
 
-	dram_end = memblock_end_of_DRAM();
+	phys_ram_end = memblock_end_of_DRAM();
 
 #ifndef CONFIG_64BIT
+#ifndef CONFIG_XIP_KERNEL
+	phys_ram_base = memblock_start_of_DRAM();
+#endif
 	/*
 	 * memblock allocator is not aware of the fact that last 4K bytes of
 	 * the addressable memory can not be mapped because of IS_ERR_VALUE
@@ -194,12 +200,12 @@ static void __init setup_bootmem(void)
 	 * be done in create_kernel_page_table.
 	 */
 	max_mapped_addr = __pa(~(ulong)0);
-	if (max_mapped_addr == (dram_end - 1))
+	if (max_mapped_addr == (phys_ram_end - 1))
 		memblock_set_current_limit(max_mapped_addr - 4096);
 #endif
 
-	min_low_pfn = PFN_UP(memblock_start_of_DRAM());
-	max_low_pfn = max_pfn = PFN_DOWN(dram_end);
+	min_low_pfn = PFN_UP(phys_ram_base);
+	max_low_pfn = max_pfn = PFN_DOWN(phys_ram_end);
 
 	dma32_phys_limit = min(4UL * SZ_1G, (unsigned long)PFN_PHYS(max_low_pfn));
 	set_max_mapnr(max_low_pfn - ARCH_PFN_OFFSET);
@@ -558,6 +564,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
 	kernel_map.xiprom = (uintptr_t)CONFIG_XIP_PHYS_ADDR;
 	kernel_map.xiprom_sz = (uintptr_t)(&_exiprom) - (uintptr_t)(&_xiprom);
 
+	phys_ram_base = CONFIG_PHYS_RAM_BASE;
 	kernel_map.phys_addr = (uintptr_t)CONFIG_PHYS_RAM_BASE;
 	kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_sdata);

View File

@@ -2489,13 +2489,15 @@ void perf_clear_dirty_counters(void)
 		return;
 
 	for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) {
-		/* Metrics and fake events don't have corresponding HW counters. */
-		if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR))
-			continue;
-		else if (i >= INTEL_PMC_IDX_FIXED)
-			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
-		else
-			wrmsrl(x86_pmu_event_addr(i), 0);
+		if (i >= INTEL_PMC_IDX_FIXED) {
+			/* Metrics and fake events don't have corresponding HW counters. */
+			if ((i - INTEL_PMC_IDX_FIXED) >= hybrid(cpuc->pmu, num_counters_fixed))
+				continue;
+
+			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0);
+		} else {
+			wrmsrl(x86_pmu_event_addr(i), 0);
+		}
 	}
 
 	bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);

View File

@@ -2904,24 +2904,28 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
  */
 static int intel_pmu_handle_irq(struct pt_regs *regs)
 {
-	struct cpu_hw_events *cpuc;
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+	bool late_ack = hybrid_bit(cpuc->pmu, late_ack);
+	bool mid_ack = hybrid_bit(cpuc->pmu, mid_ack);
 	int loops;
 	u64 status;
 	int handled;
 	int pmu_enabled;
 
-	cpuc = this_cpu_ptr(&cpu_hw_events);
-
 	/*
 	 * Save the PMU state.
 	 * It needs to be restored when leaving the handler.
 	 */
 	pmu_enabled = cpuc->enabled;
 	/*
-	 * No known reason to not always do late ACK,
-	 * but just in case do it opt-in.
+	 * In general, the early ACK is only applied for old platforms.
+	 * For the big core starts from Haswell, the late ACK should be
+	 * applied.
+	 * For the small core after Tremont, we have to do the ACK right
+	 * before re-enabling counters, which is in the middle of the
+	 * NMI handler.
 	 */
-	if (!x86_pmu.late_ack)
+	if (!late_ack && !mid_ack)
 		apic_write(APIC_LVTPC, APIC_DM_NMI);
 	intel_bts_disable_local();
 	cpuc->enabled = 0;
@@ -2958,6 +2962,8 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
 	goto again;
 
 done:
+	if (mid_ack)
+		apic_write(APIC_LVTPC, APIC_DM_NMI);
 	/* Only restore PMU state when it's active. See x86_pmu_disable(). */
 	cpuc->enabled = pmu_enabled;
 	if (pmu_enabled)
@@ -2969,7 +2975,7 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
 	 * have been reset. This avoids spurious NMIs on
 	 * Haswell CPUs.
 	 */
-	if (x86_pmu.late_ack)
+	if (late_ack)
 		apic_write(APIC_LVTPC, APIC_DM_NMI);
 	return handled;
 }
@@ -6129,7 +6135,6 @@ __init int intel_pmu_init(void)
 		static_branch_enable(&perf_is_hybrid);
 		x86_pmu.num_hybrid_pmus = X86_HYBRID_NUM_PMUS;
 
-		x86_pmu.late_ack = true;
 		x86_pmu.pebs_aliases = NULL;
 		x86_pmu.pebs_prec_dist = true;
 		x86_pmu.pebs_block = true;
@@ -6167,6 +6172,7 @@ __init int intel_pmu_init(void)
 		pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
 		pmu->name = "cpu_core";
 		pmu->cpu_type = hybrid_big;
+		pmu->late_ack = true;
 		if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) {
 			pmu->num_counters = x86_pmu.num_counters + 2;
 			pmu->num_counters_fixed = x86_pmu.num_counters_fixed + 1;
@@ -6192,6 +6198,7 @@ __init int intel_pmu_init(void)
 		pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX];
 		pmu->name = "cpu_atom";
 		pmu->cpu_type = hybrid_small;
+		pmu->mid_ack = true;
 		pmu->num_counters = x86_pmu.num_counters;
 		pmu->num_counters_fixed = x86_pmu.num_counters_fixed;
 		pmu->max_pebs_events = x86_pmu.max_pebs_events;

View File

@@ -656,6 +656,10 @@ struct x86_hybrid_pmu {
     struct event_constraint *event_constraints;
     struct event_constraint *pebs_constraints;
     struct extra_reg *extra_regs;
+
+    unsigned int late_ack :1,
+                 mid_ack :1,
+                 enabled_ack :1;
 };
 
 static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
@@ -686,6 +690,16 @@ extern struct static_key_false perf_is_hybrid;
     __Fp;                       \
 }))
 
+#define hybrid_bit(_pmu, _field)            \
+({                                          \
+    bool __Fp = x86_pmu._field;             \
+                                            \
+    if (is_hybrid() && (_pmu))              \
+        __Fp = hybrid_pmu(_pmu)->_field;    \
+                                            \
+    __Fp;                                   \
+})
+
 enum hybrid_pmu_type {
     hybrid_big    = 0x40,
     hybrid_small  = 0x20,
@@ -755,6 +769,7 @@ struct x86_pmu {
     /* PMI handler bits */
     unsigned int late_ack :1,
+                 mid_ack :1,
                  enabled_ack :1;
     /*
      * sysfs attrs
@@ -1115,9 +1130,10 @@ void x86_pmu_stop(struct perf_event *event, int flags);
 
 static inline void x86_pmu_disable_event(struct perf_event *event)
 {
+    u64 disable_mask = __this_cpu_read(cpu_hw_events.perf_ctr_virt_mask);
     struct hw_perf_event *hwc = &event->hw;
 
-    wrmsrl(hwc->config_base, hwc->config);
+    wrmsrl(hwc->config_base, hwc->config & ~disable_mask);
 
     if (is_counter_pair(hwc))
         wrmsrl(x86_pmu_config_addr(hwc->idx + 1), 0);
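The hybrid_bit() helper above is what lets the PMI handler sample the ack flags per hybrid PMU instead of from the global x86_pmu. A minimal sketch of the intended read pattern at the top of the handler (the local variable names are an assumption, not part of this diff):

    /* falls back to the x86_pmu-wide bit on non-hybrid parts */
    bool late_ack = hybrid_bit(cpuc->pmu, late_ack);
    bool mid_ack = hybrid_bit(cpuc->pmu, mid_ack);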


@@ -57,12 +57,12 @@ static const char * const sym_regex_kernel[S_NSYMTYPES] = {
     [S_REL] =
     "^(__init_(begin|end)|"
     "__x86_cpu_dev_(start|end)|"
-    "(__parainstructions|__alt_instructions)(|_end)|"
-    "(__iommu_table|__apicdrivers|__smp_locks)(|_end)|"
+    "(__parainstructions|__alt_instructions)(_end)?|"
+    "(__iommu_table|__apicdrivers|__smp_locks)(_end)?|"
     "__(start|end)_pci_.*|"
     "__(start|end)_builtin_fw|"
-    "__(start|stop)___ksymtab(|_gpl)|"
-    "__(start|stop)___kcrctab(|_gpl)|"
+    "__(start|stop)___ksymtab(_gpl)?|"
+    "__(start|stop)___kcrctab(_gpl)?|"
     "__(start|stop)___param|"
     "__(start|stop)___modver|"
     "__(start|stop)___bug_table|"


@@ -790,6 +790,7 @@ static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
         struct blkcg_gq *parent = blkg->parent;
         struct blkg_iostat_set *bisc = per_cpu_ptr(blkg->iostat_cpu, cpu);
         struct blkg_iostat cur, delta;
+        unsigned long flags;
         unsigned int seq;
 
         /* fetch the current per-cpu values */
@@ -799,21 +800,21 @@ static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
         } while (u64_stats_fetch_retry(&bisc->sync, seq));
 
         /* propagate percpu delta to global */
-        u64_stats_update_begin(&blkg->iostat.sync);
+        flags = u64_stats_update_begin_irqsave(&blkg->iostat.sync);
         blkg_iostat_set(&delta, &cur);
         blkg_iostat_sub(&delta, &bisc->last);
         blkg_iostat_add(&blkg->iostat.cur, &delta);
         blkg_iostat_add(&bisc->last, &delta);
-        u64_stats_update_end(&blkg->iostat.sync);
+        u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
 
         /* propagate global delta to parent (unless that's root) */
         if (parent && parent->parent) {
-            u64_stats_update_begin(&parent->iostat.sync);
+            flags = u64_stats_update_begin_irqsave(&parent->iostat.sync);
             blkg_iostat_set(&delta, &blkg->iostat.cur);
             blkg_iostat_sub(&delta, &blkg->iostat.last);
             blkg_iostat_add(&parent->iostat.cur, &delta);
             blkg_iostat_add(&blkg->iostat.last, &delta);
-            u64_stats_update_end(&parent->iostat.sync);
+            u64_stats_update_end_irqrestore(&parent->iostat.sync, flags);
         }
     }
@@ -848,6 +849,7 @@ static void blkcg_fill_root_iostats(void)
         memset(&tmp, 0, sizeof(tmp));
         for_each_possible_cpu(cpu) {
             struct disk_stats *cpu_dkstats;
+            unsigned long flags;
 
             cpu_dkstats = per_cpu_ptr(bdev->bd_stats, cpu);
             tmp.ios[BLKG_IOSTAT_READ] +=
@@ -864,9 +866,9 @@ static void blkcg_fill_root_iostats(void)
             tmp.bytes[BLKG_IOSTAT_DISCARD] +=
                 cpu_dkstats->sectors[STAT_DISCARD] << 9;
 
-            u64_stats_update_begin(&blkg->iostat.sync);
+            flags = u64_stats_update_begin_irqsave(&blkg->iostat.sync);
             blkg_iostat_set(&blkg->iostat.cur, &tmp);
-            u64_stats_update_end(&blkg->iostat.sync);
+            u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
         }
     }
 }
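The switch to the _irqsave variants matters because these writers can now be reached from contexts where an interrupt may fire and take the same u64_stats write section on the same CPU, which deadlocks the seqcount on 32-bit SMP. A self-contained sketch of the pattern (the struct and field names are hypothetical):

    struct my_stats {
        u64 bytes;
        struct u64_stats_sync sync;
    };

    static void my_stats_add(struct my_stats *s, u64 n)
    {
        unsigned long flags;

        /* safe even if an interrupt on this CPU also updates s */
        flags = u64_stats_update_begin_irqsave(&s->sync);
        s->bytes += n;
        u64_stats_update_end_irqrestore(&s->sync, flags);
    }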


@@ -833,7 +833,11 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf,
 
     enable = iolatency_set_min_lat_nsec(blkg, lat_val);
     if (enable) {
-        WARN_ON_ONCE(!blk_get_queue(blkg->q));
+        if (!blk_get_queue(blkg->q)) {
+            ret = -ENODEV;
+            goto out;
+        }
+
         blkg_get(blkg);
     }


@@ -596,13 +596,13 @@ static void kyber_insert_requests(struct blk_mq_hw_ctx *hctx,
             struct list_head *head = &kcq->rq_list[sched_domain];
 
             spin_lock(&kcq->lock);
+            trace_block_rq_insert(rq);
             if (at_head)
                 list_move(&rq->queuelist, head);
             else
                 list_move_tail(&rq->queuelist, head);
             sbitmap_set_bit(&khd->kcq_map[sched_domain],
                     rq->mq_ctx->index_hw[hctx->type]);
-            trace_block_rq_insert(rq);
             spin_unlock(&kcq->lock);
         }
     }


@@ -1,5 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
-/**
+/*
  * ldm - Support for Windows Logical Disk Manager (Dynamic Disks)
  *
  * Copyright (C) 2001,2002 Richard Russon <ldm@flatcap.org>


@@ -379,13 +379,6 @@ acpi_ns_repair_CID(struct acpi_evaluate_info *info,
 
             (*element_ptr)->common.reference_count =
                 original_ref_count;
-
-            /*
-             * The original_element holds a reference from the package object
-             * that represents _HID. Since a new element was created by _HID,
-             * remove the reference from the _CID package.
-             */
-            acpi_ut_remove_reference(original_element);
         }
 
         element_ptr++;


@@ -653,8 +653,6 @@ static int really_probe(struct device *dev, struct device_driver *drv)
     else if (drv->remove)
         drv->remove(dev);
 probe_failed:
-    kfree(dev->dma_range_map);
-    dev->dma_range_map = NULL;
     if (dev->bus)
         blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
                          BUS_NOTIFY_DRIVER_NOT_BOUND, dev);
@@ -662,6 +660,8 @@ static int really_probe(struct device *dev, struct device_driver *drv)
     device_links_no_driver(dev);
     devres_release_all(dev);
     arch_teardown_dma_ops(dev);
+    kfree(dev->dma_range_map);
+    dev->dma_range_map = NULL;
     driver_sysfs_remove(dev);
     dev->driver = NULL;
     dev_set_drvdata(dev, NULL);


@@ -89,12 +89,11 @@ static void __fw_load_abort(struct fw_priv *fw_priv)
 {
     /*
      * There is a small window in which user can write to 'loading'
-     * between loading done and disappearance of 'loading'
+     * between loading done/aborted and disappearance of 'loading'
      */
-    if (fw_sysfs_done(fw_priv))
+    if (fw_state_is_aborted(fw_priv) || fw_sysfs_done(fw_priv))
         return;
 
-    list_del_init(&fw_priv->pending_list);
     fw_state_aborted(fw_priv);
 }
 
@@ -280,7 +279,6 @@ static ssize_t firmware_loading_store(struct device *dev,
              * Same logic as fw_load_abort, only the DONE bit
              * is ignored and we set ABORT only on failure.
              */
-            list_del_init(&fw_priv->pending_list);
             if (rc) {
                 fw_state_aborted(fw_priv);
                 written = rc;
@@ -513,6 +511,11 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, long timeout)
     }
 
     mutex_lock(&fw_lock);
+    if (fw_state_is_aborted(fw_priv)) {
+        mutex_unlock(&fw_lock);
+        retval = -EINTR;
+        goto out;
+    }
     list_add(&fw_priv->pending_list, &pending_fw_head);
     mutex_unlock(&fw_lock);
 
@@ -535,11 +538,10 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs, long timeout)
     if (fw_state_is_aborted(fw_priv)) {
         if (retval == -ERESTARTSYS)
             retval = -EINTR;
-        else
-            retval = -EAGAIN;
     } else if (fw_priv->is_paged_buf && !fw_priv->data)
         retval = -ENOMEM;
 
+out:
     device_del(f_dev);
 err_put_dev:
     put_device(f_dev);


@@ -117,8 +117,16 @@ static inline void __fw_state_set(struct fw_priv *fw_priv,
 
     WRITE_ONCE(fw_st->status, status);
 
-    if (status == FW_STATUS_DONE || status == FW_STATUS_ABORTED)
+    if (status == FW_STATUS_DONE || status == FW_STATUS_ABORTED) {
+#ifdef CONFIG_FW_LOADER_USER_HELPER
+        /*
+         * Doing this here ensures that the fw_priv is deleted from
+         * the pending list in all abort/done paths.
+         */
+        list_del_init(&fw_priv->pending_list);
+#endif
         complete_all(&fw_st->completion);
+    }
 }
 
 static inline void fw_state_aborted(struct fw_priv *fw_priv)


@@ -783,8 +783,10 @@ static void fw_abort_batch_reqs(struct firmware *fw)
         return;
 
     fw_priv = fw->priv;
+    mutex_lock(&fw_lock);
     if (!fw_state_is_aborted(fw_priv))
         fw_state_aborted(fw_priv);
+    mutex_unlock(&fw_lock);
 }
 
 /* called from request_firmware() and request_firmware_work_func() */


@@ -74,7 +74,7 @@ static bool n64cart_do_bvec(struct device *dev, struct bio_vec *bv, u32 pos)
 
     n64cart_wait_dma();
 
-    n64cart_write_reg(PI_DRAM_REG, dma_addr + bv->bv_offset);
+    n64cart_write_reg(PI_DRAM_REG, dma_addr);
     n64cart_write_reg(PI_CART_REG, (bstart | CART_DOMAIN) & CART_MAX);
     n64cart_write_reg(PI_WRITE_REG, bv->bv_len - 1);
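The dropped "+ bv->bv_offset" was double-counting: the DMA address obtained for a bio_vec already points at the data, offset included. A hedged sketch of how such an address is typically produced, assuming the driver maps the bvec with the dma_map_bvec() helper (which folds bv_offset into the returned address):

    dma_addr_t dma_addr = dma_map_bvec(dev, bv, DMA_TO_DEVICE, 0);

    if (dma_mapping_error(dev, dma_addr))
        return false;
    /* program the controller with dma_addr as-is, no extra offset */
    n64cart_write_reg(PI_DRAM_REG, dma_addr);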


@@ -100,6 +100,7 @@ static const char * const clock_names[SYSC_MAX_CLOCKS] = {
  * @cookie: data used by legacy platform callbacks
  * @name: name if available
  * @revision: interconnect target module revision
+ * @reserved: target module is reserved and already in use
  * @enabled: sysc runtime enabled status
  * @needs_resume: runtime resume needed on resume from suspend
  * @child_needs_resume: runtime resume needed for child on resume from suspend
@@ -130,6 +131,7 @@ struct sysc {
     struct ti_sysc_cookie cookie;
     const char *name;
     u32 revision;
+    unsigned int reserved:1;
     unsigned int enabled:1;
     unsigned int needs_resume:1;
     unsigned int child_needs_resume:1;
@@ -2951,6 +2953,8 @@ static int sysc_init_soc(struct sysc *ddata)
         case SOC_3430 ... SOC_3630:
             sysc_add_disabled(0x48304000);  /* timer12 */
             break;
+        case SOC_AM3:
+            sysc_add_disabled(0x48310000);  /* rng */
         default:
             break;
         }
@@ -3093,8 +3097,8 @@ static int sysc_probe(struct platform_device *pdev)
         return error;
 
     error = sysc_check_active_timer(ddata);
-    if (error)
-        return error;
+    if (error == -EBUSY)
+        ddata->reserved = true;
 
     error = sysc_get_clocks(ddata);
     if (error)
@@ -3130,11 +3134,15 @@ static int sysc_probe(struct platform_device *pdev)
     sysc_show_registers(ddata);
 
     ddata->dev->type = &sysc_device_type;
-    error = of_platform_populate(ddata->dev->of_node, sysc_match_table,
-                     pdata ? pdata->auxdata : NULL,
-                     ddata->dev);
-    if (error)
-        goto err;
+
+    if (!ddata->reserved) {
+        error = of_platform_populate(ddata->dev->of_node,
+                         sysc_match_table,
+                         pdata ? pdata->auxdata : NULL,
+                         ddata->dev);
+        if (error)
+            goto err;
+    }
 
     INIT_DELAYED_WORK(&ddata->idle_work, ti_sysc_idle);


@@ -254,11 +254,11 @@ static int ftpm_tee_probe(struct device *dev)
     pvt_data->session = sess_arg.session;
 
     /* Allocate dynamic shared memory with fTPM TA */
-    pvt_data->shm = tee_shm_alloc(pvt_data->ctx,
-                      MAX_COMMAND_SIZE + MAX_RESPONSE_SIZE,
-                      TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
+    pvt_data->shm = tee_shm_alloc_kernel_buf(pvt_data->ctx,
+                         MAX_COMMAND_SIZE +
+                         MAX_RESPONSE_SIZE);
     if (IS_ERR(pvt_data->shm)) {
-        dev_err(dev, "%s: tee_shm_alloc failed\n", __func__);
+        dev_err(dev, "%s: tee_shm_alloc_kernel_buf failed\n", __func__);
         rc = -ENOMEM;
         goto out_shm_alloc;
     }


@@ -382,8 +382,8 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
     alt_intercepts = 2 * idx_intercept_sum > cpu_data->total - idx_hit_sum;
     alt_recent = idx_recent_sum > NR_RECENT / 2;
     if (alt_recent || alt_intercepts) {
-        s64 last_enabled_span_ns = duration_ns;
-        int last_enabled_idx = idx;
+        s64 first_suitable_span_ns = duration_ns;
+        int first_suitable_idx = idx;
 
         /*
          * Look for the deepest idle state whose target residency had
@@ -397,37 +397,51 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
         intercept_sum = 0;
         recent_sum = 0;
 
-        for (i = idx - 1; i >= idx0; i--) {
+        for (i = idx - 1; i >= 0; i--) {
             struct teo_bin *bin = &cpu_data->state_bins[i];
             s64 span_ns;
 
             intercept_sum += bin->intercepts;
             recent_sum += bin->recent;
 
-            if (dev->states_usage[i].disable)
-                continue;
-
             span_ns = teo_middle_of_bin(i, drv);
-            if (!teo_time_ok(span_ns)) {
-                /*
-                 * The current state is too shallow, so select
-                 * the first enabled deeper state.
-                 */
-                duration_ns = last_enabled_span_ns;
-                idx = last_enabled_idx;
-                break;
-            }
 
             if ((!alt_recent || 2 * recent_sum > idx_recent_sum) &&
                 (!alt_intercepts ||
                  2 * intercept_sum > idx_intercept_sum)) {
-                idx = i;
-                duration_ns = span_ns;
+                if (teo_time_ok(span_ns) &&
+                    !dev->states_usage[i].disable) {
+                    idx = i;
+                    duration_ns = span_ns;
+                } else {
+                    /*
+                     * The current state is too shallow or
+                     * disabled, so take the first enabled
+                     * deeper state with suitable time span.
+                     */
+                    idx = first_suitable_idx;
+                    duration_ns = first_suitable_span_ns;
+                }
                 break;
             }
 
-            last_enabled_span_ns = span_ns;
-            last_enabled_idx = i;
+            if (dev->states_usage[i].disable)
+                continue;
+
+            if (!teo_time_ok(span_ns)) {
+                /*
+                 * The current state is too shallow, but if an
+                 * alternative candidate state has been found,
+                 * it may still turn out to be a better choice.
+                 */
+                if (first_suitable_idx != idx)
+                    continue;
+
+                break;
+            }
+
+            first_suitable_span_ns = span_ns;
+            first_suitable_idx = i;
         }
     }
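The restructured loop changes both what the statistics see and what the fallback is: disabled bins still contribute to intercept_sum and recent_sum, and only when the sums say "stop here" does the code choose between the matched bin and the most recently recorded usable one. A stripped-down sketch of that scan shape (the arrays and the matches() predicate are hypothetical, not the kernel's types):

    static int pick_bin(int idx, const bool *usable, bool (*matches)(int))
    {
        int fallback = idx;        /* deepest usable bin seen so far */

        for (int i = idx - 1; i >= 0; i--) {
            if (matches(i))        /* statistics point at this bin */
                return usable[i] ? i : fallback;
            if (usable[i])
                fallback = i;      /* remember as fallback candidate */
        }
        return idx;
    }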


@@ -294,6 +294,14 @@ struct idxd_desc {
     struct idxd_wq *wq;
 };
 
+/*
+ * This is software defined error for the completion status. We overload the error code
+ * that will never appear in completion status and only SWERR register.
+ */
+enum idxd_completion_status {
+    IDXD_COMP_DESC_ABORT = 0xff,
+};
+
 #define confdev_to_idxd(dev) container_of(dev, struct idxd_device, conf_dev)
 #define confdev_to_wq(dev) container_of(dev, struct idxd_wq, conf_dev)
@@ -482,4 +490,10 @@ static inline void perfmon_init(void) {}
 static inline void perfmon_exit(void) {}
 #endif
 
+static inline void complete_desc(struct idxd_desc *desc, enum idxd_complete_type reason)
+{
+    idxd_dma_complete_txd(desc, reason);
+    idxd_free_desc(desc->wq, desc);
+}
+
 #endif
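Overloading 0xff works because the device never reports that value in a completion record; it can only appear in the SWERR register, so software is free to claim it as an "aborted by driver" marker. The interrupt paths then need one extra check before treating a non-success status as a hardware fault, roughly (mirroring the irq.c hunks later in this diff):

    u8 status = desc->completion->status & DSA_COMP_STATUS_MASK;

    if (status == IDXD_COMP_DESC_ABORT) {
        /* poisoned by the abort path, not a device error */
        complete_desc(desc, IDXD_COMPLETE_ABORT);
        return;
    }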


@@ -102,6 +102,8 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
         spin_lock_init(&idxd->irq_entries[i].list_lock);
     }
 
+    idxd_msix_perm_setup(idxd);
+
     irq_entry = &idxd->irq_entries[0];
     rc = request_threaded_irq(irq_entry->vector, NULL, idxd_misc_thread,
                   0, "idxd-misc", irq_entry);
@@ -148,7 +150,6 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
     }
 
     idxd_unmask_error_interrupts(idxd);
-    idxd_msix_perm_setup(idxd);
     return 0;
 
 err_wq_irqs:
@@ -162,6 +163,7 @@ static int idxd_setup_interrupts(struct idxd_device *idxd)
 err_misc_irq:
     /* Disable error interrupt generation */
     idxd_mask_error_interrupts(idxd);
+    idxd_msix_perm_clear(idxd);
 err_irq_entries:
     pci_free_irq_vectors(pdev);
     dev_err(dev, "No usable interrupts\n");
@@ -758,32 +760,40 @@ static void idxd_shutdown(struct pci_dev *pdev)
     for (i = 0; i < msixcnt; i++) {
         irq_entry = &idxd->irq_entries[i];
         synchronize_irq(irq_entry->vector);
-        free_irq(irq_entry->vector, irq_entry);
         if (i == 0)
             continue;
         idxd_flush_pending_llist(irq_entry);
         idxd_flush_work_list(irq_entry);
     }
-
-    idxd_msix_perm_clear(idxd);
-    idxd_release_int_handles(idxd);
-    pci_free_irq_vectors(pdev);
-    pci_iounmap(pdev, idxd->reg_base);
-    pci_disable_device(pdev);
-    destroy_workqueue(idxd->wq);
+    flush_workqueue(idxd->wq);
 }
 
 static void idxd_remove(struct pci_dev *pdev)
 {
     struct idxd_device *idxd = pci_get_drvdata(pdev);
+    struct idxd_irq_entry *irq_entry;
+    int msixcnt = pci_msix_vec_count(pdev);
+    int i;
 
     dev_dbg(&pdev->dev, "%s called\n", __func__);
     idxd_shutdown(pdev);
     if (device_pasid_enabled(idxd))
         idxd_disable_system_pasid(idxd);
     idxd_unregister_devices(idxd);
-    perfmon_pmu_remove(idxd);
+
+    for (i = 0; i < msixcnt; i++) {
+        irq_entry = &idxd->irq_entries[i];
+        free_irq(irq_entry->vector, irq_entry);
+    }
+    idxd_msix_perm_clear(idxd);
+    idxd_release_int_handles(idxd);
+    pci_free_irq_vectors(pdev);
+    pci_iounmap(pdev, idxd->reg_base);
     iommu_dev_disable_feature(&pdev->dev, IOMMU_DEV_FEAT_SVA);
+    pci_disable_device(pdev);
+    destroy_workqueue(idxd->wq);
+    perfmon_pmu_remove(idxd);
+    device_unregister(&idxd->conf_dev);
 }
 
 static struct pci_driver idxd_pci_driver = {


@@ -245,12 +245,6 @@ static inline bool match_fault(struct idxd_desc *desc, u64 fault_addr)
     return false;
 }
 
-static inline void complete_desc(struct idxd_desc *desc, enum idxd_complete_type reason)
-{
-    idxd_dma_complete_txd(desc, reason);
-    idxd_free_desc(desc->wq, desc);
-}
-
 static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry,
                      enum irq_work_type wtype,
                      int *processed, u64 data)
@@ -272,8 +266,16 @@ static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry,
         reason = IDXD_COMPLETE_DEV_FAIL;
 
     llist_for_each_entry_safe(desc, t, head, llnode) {
-        if (desc->completion->status) {
-            if ((desc->completion->status & DSA_COMP_STATUS_MASK) != DSA_COMP_SUCCESS)
+        u8 status = desc->completion->status & DSA_COMP_STATUS_MASK;
+
+        if (status) {
+            if (unlikely(status == IDXD_COMP_DESC_ABORT)) {
+                complete_desc(desc, IDXD_COMPLETE_ABORT);
+                (*processed)++;
+                continue;
+            }
+
+            if (unlikely(status != DSA_COMP_SUCCESS))
                 match_fault(desc, data);
             complete_desc(desc, reason);
             (*processed)++;
@@ -329,7 +331,14 @@ static int irq_process_work_list(struct idxd_irq_entry *irq_entry,
     spin_unlock_irqrestore(&irq_entry->list_lock, flags);
 
     list_for_each_entry(desc, &flist, list) {
-        if ((desc->completion->status & DSA_COMP_STATUS_MASK) != DSA_COMP_SUCCESS)
+        u8 status = desc->completion->status & DSA_COMP_STATUS_MASK;
+
+        if (unlikely(status == IDXD_COMP_DESC_ABORT)) {
+            complete_desc(desc, IDXD_COMPLETE_ABORT);
+            continue;
+        }
+
+        if (unlikely(status != DSA_COMP_SUCCESS))
             match_fault(desc, data);
         complete_desc(desc, reason);
     }


@@ -25,11 +25,10 @@ static struct idxd_desc *__get_desc(struct idxd_wq *wq, int idx, int cpu)
      * Descriptor completion vectors are 1...N for MSIX. We will round
      * robin through the N vectors.
      */
-    wq->vec_ptr = (wq->vec_ptr % idxd->num_wq_irqs) + 1;
+    wq->vec_ptr = desc->vector = (wq->vec_ptr % idxd->num_wq_irqs) + 1;
     if (!idxd->int_handles) {
         desc->hw->int_handle = wq->vec_ptr;
     } else {
-        desc->vector = wq->vec_ptr;
         /*
          * int_handles are only for descriptor completion. However for device
          * MSIX enumeration, vec 0 is used for misc interrupts. Therefore even
@@ -88,9 +87,64 @@ void idxd_free_desc(struct idxd_wq *wq, struct idxd_desc *desc)
     sbitmap_queue_clear(&wq->sbq, desc->id, cpu);
 }
 
+static struct idxd_desc *list_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
+                     struct idxd_desc *desc)
+{
+    struct idxd_desc *d, *n;
+
+    lockdep_assert_held(&ie->list_lock);
+    list_for_each_entry_safe(d, n, &ie->work_list, list) {
+        if (d == desc) {
+            list_del(&d->list);
+            return d;
+        }
+    }
+
+    /*
+     * At this point, the desc needs to be aborted is held by the completion
+     * handler where it has taken it off the pending list but has not added to the
+     * work list. It will be cleaned up by the interrupt handler when it sees the
+     * IDXD_COMP_DESC_ABORT for completion status.
+     */
+    return NULL;
+}
+
+static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie,
+                 struct idxd_desc *desc)
+{
+    struct idxd_desc *d, *t, *found = NULL;
+    struct llist_node *head;
+    unsigned long flags;
+
+    desc->completion->status = IDXD_COMP_DESC_ABORT;
+    /*
+     * Grab the list lock so it will block the irq thread handler. This allows the
+     * abort code to locate the descriptor need to be aborted.
+     */
+    spin_lock_irqsave(&ie->list_lock, flags);
+    head = llist_del_all(&ie->pending_llist);
+    if (head) {
+        llist_for_each_entry_safe(d, t, head, llnode) {
+            if (d == desc) {
+                found = desc;
+                continue;
+            }
+            list_add_tail(&desc->list, &ie->work_list);
+        }
+    }
+
+    if (!found)
+        found = list_abort_desc(wq, ie, desc);
+    spin_unlock_irqrestore(&ie->list_lock, flags);
+
+    if (found)
+        complete_desc(found, IDXD_COMPLETE_ABORT);
+}
+
 int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
 {
     struct idxd_device *idxd = wq->idxd;
+    struct idxd_irq_entry *ie = NULL;
     void __iomem *portal;
     int rc;
 
@@ -108,6 +162,16 @@ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
      * even on UP because the recipient is a device.
      */
     wmb();
+
+    /*
+     * Pending the descriptor to the lockless list for the irq_entry
+     * that we designated the descriptor to.
+     */
+    if (desc->hw->flags & IDXD_OP_FLAG_RCI) {
+        ie = &idxd->irq_entries[desc->vector];
+        llist_add(&desc->llnode, &ie->pending_llist);
+    }
+
     if (wq_dedicated(wq)) {
         iosubmit_cmds512(portal, desc->hw, 1);
     } else {
@@ -118,29 +182,13 @@ int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
          * device is not accepting descriptor at all.
          */
         rc = enqcmds(portal, desc->hw);
-        if (rc < 0)
+        if (rc < 0) {
+            if (ie)
+                llist_abort_desc(wq, ie, desc);
             return rc;
+        }
     }
 
     percpu_ref_put(&wq->wq_active);
-
-    /*
-     * Pending the descriptor to the lockless list for the irq_entry
-     * that we designated the descriptor to.
-     */
-    if (desc->hw->flags & IDXD_OP_FLAG_RCI) {
-        int vec;
-
-        /*
-         * If the driver is on host kernel, it would be the value
-         * assigned to interrupt handle, which is index for MSIX
-         * vector. If it's guest then can't use the int_handle since
-         * that is the index to IMS for the entire device. The guest
-         * device local index will be used.
-         */
-        vec = !idxd->int_handles ? desc->hw->int_handle : desc->vector;
-
-        llist_add(&desc->llnode, &idxd->irq_entries[vec].pending_llist);
-    }
-
     return 0;
 }
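Note the ordering the new submit path relies on: the descriptor is published on the irq_entry's pending llist before it is handed to the device, so a failed ENQCMDS can always find something to unwind. Condensed to its control flow (same identifiers as the hunks above):

    if (desc->hw->flags & IDXD_OP_FLAG_RCI) {
        ie = &idxd->irq_entries[desc->vector];
        llist_add(&desc->llnode, &ie->pending_llist);   /* publish first */
    }

    rc = enqcmds(portal, desc->hw);                     /* then submit */
    if (rc < 0 && ie)
        llist_abort_desc(wq, ie, desc);                 /* unwind */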


@@ -1744,8 +1744,6 @@ void idxd_unregister_devices(struct idxd_device *idxd)
 
         device_unregister(&group->conf_dev);
     }
-
-    device_unregister(&idxd->conf_dev);
 }
 
 int idxd_register_bus_type(void)


@@ -812,6 +812,8 @@ static struct dma_async_tx_descriptor *imxdma_prep_slave_sg(
         dma_length += sg_dma_len(sg);
     }
 
+    imxdma_config_write(chan, &imxdmac->config, direction);
+
     switch (imxdmac->word_size) {
     case DMA_SLAVE_BUSWIDTH_4_BYTES:
         if (sg_dma_len(sgl) & 3 || sgl->dma_address & 3)


@@ -67,8 +67,12 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
         return NULL;
 
     ofdma_target = of_dma_find_controller(&dma_spec_target);
-    if (!ofdma_target)
-        return NULL;
+    if (!ofdma_target) {
+        ofdma->dma_router->route_free(ofdma->dma_router->dev,
+                          route_data);
+        chan = ERR_PTR(-EPROBE_DEFER);
+        goto err;
+    }
 
     chan = ofdma_target->of_dma_xlate(&dma_spec_target, ofdma_target);
     if (IS_ERR_OR_NULL(chan)) {
@@ -89,6 +93,7 @@ static struct dma_chan *of_dma_router_xlate(struct of_phandle_args *dma_spec,
         }
     }
 
+err:
     /*
      * Need to put the node back since the ofdma->of_dma_route_allocate
      * has taken it for generating the new, translated dma_spec


@@ -855,8 +855,8 @@ static int usb_dmac_probe(struct platform_device *pdev)
 
 error:
     of_dma_controller_free(pdev->dev.of_node);
+    pm_runtime_put(&pdev->dev);
 error_pm:
-    pm_runtime_put(&pdev->dev);
     pm_runtime_disable(&pdev->dev);
     return ret;
 }


@@ -1200,7 +1200,7 @@ static int stm32_dma_alloc_chan_resources(struct dma_chan *c)
     chan->config_init = false;
 
-    ret = pm_runtime_get_sync(dmadev->ddev.dev);
+    ret = pm_runtime_resume_and_get(dmadev->ddev.dev);
     if (ret < 0)
         return ret;
@@ -1470,7 +1470,7 @@ static int stm32_dma_suspend(struct device *dev)
     struct stm32_dma_device *dmadev = dev_get_drvdata(dev);
     int id, ret, scr;
 
-    ret = pm_runtime_get_sync(dev);
+    ret = pm_runtime_resume_and_get(dev);
     if (ret < 0)
         return ret;


@@ -137,7 +137,7 @@ static void *stm32_dmamux_route_allocate(struct of_phandle_args *dma_spec,
 
     /* Set dma request */
     spin_lock_irqsave(&dmamux->lock, flags);
-    ret = pm_runtime_get_sync(&pdev->dev);
+    ret = pm_runtime_resume_and_get(&pdev->dev);
     if (ret < 0) {
         spin_unlock_irqrestore(&dmamux->lock, flags);
         goto error;
@@ -336,7 +336,7 @@ static int stm32_dmamux_suspend(struct device *dev)
     struct stm32_dmamux_data *stm32_dmamux = platform_get_drvdata(pdev);
     int i, ret;
 
-    ret = pm_runtime_get_sync(dev);
+    ret = pm_runtime_resume_and_get(dev);
     if (ret < 0)
         return ret;
@@ -361,7 +361,7 @@ static int stm32_dmamux_resume(struct device *dev)
     if (ret < 0)
         return ret;
 
-    ret = pm_runtime_get_sync(dev);
+    ret = pm_runtime_resume_and_get(dev);
     if (ret < 0)
         return ret;
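The conversions in these two STM32 drivers are the stock pm_runtime_resume_and_get() cleanup: pm_runtime_get_sync() raises the usage count even when the resume fails, so every error path needs a matching put, whereas the newer helper drops the reference itself on failure. The two equivalent forms, for comparison:

    /* old form: caller must undo the count on error */
    ret = pm_runtime_get_sync(dev);
    if (ret < 0) {
        pm_runtime_put_noidle(dev);
        return ret;
    }

    /* new form: reference already dropped if this fails */
    ret = pm_runtime_resume_and_get(dev);
    if (ret < 0)
        return ret;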


@@ -209,8 +209,8 @@ static int uniphier_xdmac_chan_stop(struct uniphier_xdmac_chan *xc)
     writel(0, xc->reg_ch_base + XDMAC_TSS);
 
     /* wait until transfer is stopped */
-    return readl_poll_timeout(xc->reg_ch_base + XDMAC_STAT, val,
-                  !(val & XDMAC_STAT_TENF), 100, 1000);
+    return readl_poll_timeout_atomic(xc->reg_ch_base + XDMAC_STAT, val,
+                     !(val & XDMAC_STAT_TENF), 100, 1000);
 }
 
 /* xc->vc.lock must be held by caller */
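readl_poll_timeout() sleeps between reads (usleep_range), which is not allowed here: this stop path is reached with the channel's vc.lock spinlock held, as the "must be held by caller" comments in this file indicate. The _atomic variant busy-waits with udelay instead; the argument meaning is unchanged (annotated for clarity):

    ret = readl_poll_timeout_atomic(xc->reg_ch_base + XDMAC_STAT, val,
                                    !(val & XDMAC_STAT_TENF),
                                    100,     /* delay between reads, us */
                                    1000);   /* total timeout, us */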


@@ -394,6 +394,7 @@ struct xilinx_dma_tx_descriptor {
  * @genlock: Support genlock mode
  * @err: Channel has errors
  * @idle: Check for channel idle
+ * @terminating: Check for channel being synchronized by user
  * @tasklet: Cleanup work after irq
  * @config: Device configuration info
  * @flush_on_fsync: Flush on Frame sync
@@ -431,6 +432,7 @@ struct xilinx_dma_chan {
     bool genlock;
     bool err;
     bool idle;
+    bool terminating;
     struct tasklet_struct tasklet;
     struct xilinx_vdma_config config;
     bool flush_on_fsync;
@@ -1049,6 +1051,13 @@ static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
         /* Run any dependencies, then free the descriptor */
         dma_run_dependencies(&desc->async_tx);
         xilinx_dma_free_tx_descriptor(chan, desc);
+
+        /*
+         * While we ran a callback the user called a terminate function,
+         * which takes care of cleaning up any remaining descriptors
+         */
+        if (chan->terminating)
+            break;
     }
 
     spin_unlock_irqrestore(&chan->lock, flags);
@@ -1965,6 +1974,8 @@ static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx)
     if (desc->cyclic)
         chan->cyclic = true;
 
+    chan->terminating = false;
+
     spin_unlock_irqrestore(&chan->lock, flags);
 
     return cookie;
@@ -2436,6 +2447,7 @@ static int xilinx_dma_terminate_all(struct dma_chan *dchan)
     xilinx_dma_chan_reset(chan);
 
     /* Remove and free all of the descriptors in the lists */
+    chan->terminating = true;
     xilinx_dma_free_descriptors(chan);
     chan->idle = true;
View File

@ -212,10 +212,9 @@ static int tee_bnxt_fw_probe(struct device *dev)
pvt_data.dev = dev; pvt_data.dev = dev;
fw_shm_pool = tee_shm_alloc(pvt_data.ctx, MAX_SHM_MEM_SZ, fw_shm_pool = tee_shm_alloc_kernel_buf(pvt_data.ctx, MAX_SHM_MEM_SZ);
TEE_SHM_MAPPED | TEE_SHM_DMA_BUF);
if (IS_ERR(fw_shm_pool)) { if (IS_ERR(fw_shm_pool)) {
dev_err(pvt_data.dev, "tee_shm_alloc failed\n"); dev_err(pvt_data.dev, "tee_shm_alloc_kernel_buf failed\n");
err = PTR_ERR(fw_shm_pool); err = PTR_ERR(fw_shm_pool);
goto out_sess; goto out_sess;
} }
@ -242,6 +241,14 @@ static int tee_bnxt_fw_remove(struct device *dev)
return 0; return 0;
} }
static void tee_bnxt_fw_shutdown(struct device *dev)
{
tee_shm_free(pvt_data.fw_shm_pool);
tee_client_close_session(pvt_data.ctx, pvt_data.session_id);
tee_client_close_context(pvt_data.ctx);
pvt_data.ctx = NULL;
}
static const struct tee_client_device_id tee_bnxt_fw_id_table[] = { static const struct tee_client_device_id tee_bnxt_fw_id_table[] = {
{UUID_INIT(0x6272636D, 0x2019, 0x0716, {UUID_INIT(0x6272636D, 0x2019, 0x0716,
0x42, 0x43, 0x4D, 0x5F, 0x53, 0x43, 0x48, 0x49)}, 0x42, 0x43, 0x4D, 0x5F, 0x53, 0x43, 0x48, 0x49)},
@ -257,6 +264,7 @@ static struct tee_client_driver tee_bnxt_fw_driver = {
.bus = &tee_bus_type, .bus = &tee_bus_type,
.probe = tee_bnxt_fw_probe, .probe = tee_bnxt_fw_probe,
.remove = tee_bnxt_fw_remove, .remove = tee_bnxt_fw_remove,
.shutdown = tee_bnxt_fw_shutdown,
}, },
}; };
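tee_shm_alloc_kernel_buf() replaces the older flag-based tee_shm_alloc() for the common case of a kernel buffer shared with a trusted application, so callers no longer pick TEE_SHM_* flags themselves; the same conversion appears in the fTPM hunk earlier in this diff. The call shape both drivers now use:

    struct tee_shm *shm;

    shm = tee_shm_alloc_kernel_buf(ctx, size);   /* size in bytes */
    if (IS_ERR(shm))
        return PTR_ERR(shm);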


@@ -953,6 +953,8 @@ static int fme_perf_offline_cpu(unsigned int cpu, struct hlist_node *node)
         return 0;
 
     priv->cpu = target;
+    perf_pmu_migrate_context(&priv->pmu, cpu, target);
+
     return 0;
 }
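Without the migrate call, events opened on the outgoing CPU silently stop counting after hotplug; perf_pmu_migrate_context() re-homes them onto the CPU the PMU now nominates. A sketch of a complete offline callback under that assumption (struct my_pmu is hypothetical):

    static int my_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
    {
        struct my_pmu *p = hlist_entry_safe(node, struct my_pmu, node);
        int target;

        if (!p || cpu != p->cpu)
            return 0;

        /* pick any other online CPU as the new event home */
        target = cpumask_any_but(cpu_online_mask, cpu);
        if (target >= nr_cpu_ids)
            return 0;

        p->cpu = target;
        perf_pmu_migrate_context(&p->pmu, cpu, target);
        return 0;
    }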


@@ -1040,7 +1040,7 @@ void amdgpu_acpi_detect(void)
  */
 bool amdgpu_acpi_is_s0ix_supported(struct amdgpu_device *adev)
 {
-#if defined(CONFIG_AMD_PMC) || defined(CONFIG_AMD_PMC_MODULE)
+#if IS_ENABLED(CONFIG_AMD_PMC) && IS_ENABLED(CONFIG_PM_SLEEP)
     if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0) {
         if (adev->flags & AMD_IS_APU)
             return pm_suspend_target_state == PM_SUSPEND_TO_IDLE;


@@ -468,6 +468,46 @@ bool amdgpu_atomfirmware_dynamic_boot_config_supported(struct amdgpu_device *ade
     return (fw_cap & ATOM_FIRMWARE_CAP_DYNAMIC_BOOT_CFG_ENABLE) ? true : false;
 }
 
+/*
+ * Helper function to query RAS EEPROM address
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Return true if vbios supports ras rom address reporting
+ */
+bool amdgpu_atomfirmware_ras_rom_addr(struct amdgpu_device *adev, uint8_t* i2c_address)
+{
+    struct amdgpu_mode_info *mode_info = &adev->mode_info;
+    int index;
+    u16 data_offset, size;
+    union firmware_info *firmware_info;
+    u8 frev, crev;
+
+    if (i2c_address == NULL)
+        return false;
+
+    *i2c_address = 0;
+
+    index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
+                        firmwareinfo);
+
+    if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context,
+                      index, &size, &frev, &crev, &data_offset)) {
+        /* support firmware_info 3.4 + */
+        if ((frev == 3 && crev >= 4) || (frev > 3)) {
+            firmware_info = (union firmware_info *)
+                (mode_info->atom_context->bios + data_offset);
+            *i2c_address = firmware_info->v34.ras_rom_i2c_slave_addr;
+        }
+    }
+
+    if (*i2c_address != 0)
+        return true;
+
+    return false;
+}
+
 union smu_info {
     struct atom_smu_info_v3_1 v31;
 };
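A later hunk in this same merge wires the helper into the RAS EEPROM address lookup; the call shape is simply (the consumer line is illustrative):

    uint8_t i2c_addr;

    /* prefer the VBIOS-reported address on firmware_info 3.4+ */
    if (amdgpu_atomfirmware_ras_rom_addr(adev, &i2c_addr))
        return i2c_addr;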


@@ -36,6 +36,7 @@ int amdgpu_atomfirmware_get_clock_info(struct amdgpu_device *adev);
 int amdgpu_atomfirmware_get_gfx_info(struct amdgpu_device *adev);
 bool amdgpu_atomfirmware_mem_ecc_supported(struct amdgpu_device *adev);
 bool amdgpu_atomfirmware_sram_ecc_supported(struct amdgpu_device *adev);
+bool amdgpu_atomfirmware_ras_rom_addr(struct amdgpu_device *adev, uint8_t* i2c_address);
 bool amdgpu_atomfirmware_mem_training_supported(struct amdgpu_device *adev);
 bool amdgpu_atomfirmware_dynamic_boot_config_supported(struct amdgpu_device *adev);
 int amdgpu_atomfirmware_get_fw_reserved_fb_size(struct amdgpu_device *adev);


@@ -299,6 +299,9 @@ int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
                   ip->major, ip->minor,
                   ip->revision);
 
+            if (le16_to_cpu(ip->hw_id) == VCN_HWID)
+                adev->vcn.num_vcn_inst++;
+
             for (k = 0; k < num_base_address; k++) {
                 /*
                  * convert the endianness of base addresses in place,
@@ -385,7 +388,7 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
 {
     struct binary_header *bhdr;
     struct harvest_table *harvest_info;
-    int i;
+    int i, vcn_harvest_count = 0;
 
     bhdr = (struct binary_header *)adev->mman.discovery_bin;
     harvest_info = (struct harvest_table *)(adev->mman.discovery_bin +
@@ -397,8 +400,7 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
         switch (le32_to_cpu(harvest_info->list[i].hw_id)) {
         case VCN_HWID:
-            adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
-            adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
+            vcn_harvest_count++;
             break;
         case DMU_HWID:
             adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK;
@@ -407,6 +409,10 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
             break;
         }
     }
+    if (vcn_harvest_count == adev->vcn.num_vcn_inst) {
+        adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK;
+        adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK;
+    }
 }
 
 int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)


@@ -1213,6 +1213,13 @@ static const struct pci_device_id pciidlist[] = {
     {0x1002, 0x740F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
     {0x1002, 0x7410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
 
+    /* BEIGE_GOBY */
+    {0x1002, 0x7420, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
+    {0x1002, 0x7421, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
+    {0x1002, 0x7422, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
+    {0x1002, 0x7423, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
+    {0x1002, 0x743F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY},
+
     {0, 0, 0}
 };
 
@@ -1564,6 +1571,8 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
         pci_ignore_hotplug(pdev);
         pci_set_power_state(pdev, PCI_D3cold);
         drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
+    } else if (amdgpu_device_supports_boco(drm_dev)) {
+        /* nothing to do */
     } else if (amdgpu_device_supports_baco(drm_dev)) {
         amdgpu_device_baco_enter(drm_dev);
     }


@@ -26,6 +26,7 @@
 #include "amdgpu_ras.h"
 #include <linux/bits.h>
 #include "atom.h"
+#include "amdgpu_atomfirmware.h"
 
 #define EEPROM_I2C_TARGET_ADDR_VEGA20       0xA0
 #define EEPROM_I2C_TARGET_ADDR_ARCTURUS     0xA8
@@ -96,6 +97,9 @@ static bool __get_eeprom_i2c_addr(struct amdgpu_device *adev,
     if (!i2c_addr)
         return false;
 
+    if (amdgpu_atomfirmware_ras_rom_addr(adev, (uint8_t*)i2c_addr))
+        return true;
+
     switch (adev->asic_type) {
     case CHIP_VEGA20:
         *i2c_addr = EEPROM_I2C_TARGET_ADDR_VEGA20;


@@ -54,11 +54,12 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
 {
     struct drm_mm_node *node;
 
-    if (!res) {
+    if (!res || res->mem_type == TTM_PL_SYSTEM) {
         cur->start = start;
         cur->size = size;
         cur->remaining = size;
         cur->node = NULL;
+        WARN_ON(res && start + size > res->num_pages << PAGE_SHIFT);
         return;
     }
View File

@ -1295,6 +1295,16 @@ static bool is_raven_kicker(struct amdgpu_device *adev)
return false; return false;
} }
static bool check_if_enlarge_doorbell_range(struct amdgpu_device *adev)
{
if ((adev->asic_type == CHIP_RENOIR) &&
(adev->gfx.me_fw_version >= 0x000000a5) &&
(adev->gfx.me_feature_version >= 52))
return true;
else
return false;
}
static void gfx_v9_0_check_if_need_gfxoff(struct amdgpu_device *adev) static void gfx_v9_0_check_if_need_gfxoff(struct amdgpu_device *adev)
{ {
if (gfx_v9_0_should_disable_gfxoff(adev->pdev)) if (gfx_v9_0_should_disable_gfxoff(adev->pdev))
@ -3675,7 +3685,16 @@ static int gfx_v9_0_kiq_init_register(struct amdgpu_ring *ring)
if (ring->use_doorbell) { if (ring->use_doorbell) {
WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_LOWER, WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_LOWER,
(adev->doorbell_index.kiq * 2) << 2); (adev->doorbell_index.kiq * 2) << 2);
WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER, /* If GC has entered CGPG, ringing doorbell > first page
* doesn't wakeup GC. Enlarge CP_MEC_DOORBELL_RANGE_UPPER to
* workaround this issue. And this change has to align with firmware
* update.
*/
if (check_if_enlarge_doorbell_range(adev))
WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
(adev->doorbell.size - 4));
else
WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER,
(adev->doorbell_index.userqueue_end * 2) << 2); (adev->doorbell_index.userqueue_end * 2) << 2);
} }


@@ -1548,6 +1548,7 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
     }
 
     hdr = (const struct dmcub_firmware_header_v1_0 *)adev->dm.dmub_fw->data;
+    adev->dm.dmcub_fw_version = le32_to_cpu(hdr->header.ucode_version);
 
     if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
         adev->firmware.ucode[AMDGPU_UCODE_ID_DMCUB].ucode_id =
@@ -1561,7 +1562,6 @@ static int dm_dmub_sw_init(struct amdgpu_device *adev)
              adev->dm.dmcub_fw_version);
     }
 
-    adev->dm.dmcub_fw_version = le32_to_cpu(hdr->header.ucode_version);
 
     adev->dm.dmub_srv = kzalloc(sizeof(*adev->dm.dmub_srv), GFP_KERNEL);
     dmub_srv = adev->dm.dmub_srv;
@@ -9605,7 +9605,12 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
         } else if (amdgpu_freesync_vid_mode && aconnector &&
                is_freesync_video_mode(&new_crtc_state->mode,
                           aconnector)) {
-            set_freesync_fixed_config(dm_new_crtc_state);
+            struct drm_display_mode *high_mode;
+
+            high_mode = get_highest_refresh_rate_mode(aconnector, false);
+            if (!drm_mode_equal(&new_crtc_state->mode, high_mode)) {
+                set_freesync_fixed_config(dm_new_crtc_state);
+            }
         }
 
         ret = dm_atomic_get_state(state, &dm_state);


@@ -584,7 +584,7 @@ static void amdgpu_dm_irq_schedule_work(struct amdgpu_device *adev,
     handler_data = container_of(handler_list->next, struct amdgpu_dm_irq_handler_data, list);
 
     /*allocate a new amdgpu_dm_irq_handler_data*/
-    handler_data_add = kzalloc(sizeof(*handler_data), GFP_KERNEL);
+    handler_data_add = kzalloc(sizeof(*handler_data), GFP_ATOMIC);
     if (!handler_data_add) {
         DRM_ERROR("DM_IRQ: failed to allocate irq handler!\n");
         return;


@@ -66,9 +66,11 @@ int rn_get_active_display_cnt_wa(
     for (i = 0; i < context->stream_count; i++) {
         const struct dc_stream_state *stream = context->streams[i];
 
+        /* Extend the WA to DP for Linux*/
         if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A ||
                 stream->signal == SIGNAL_TYPE_DVI_SINGLE_LINK ||
-                stream->signal == SIGNAL_TYPE_DVI_DUAL_LINK)
+                stream->signal == SIGNAL_TYPE_DVI_DUAL_LINK ||
+                stream->signal == SIGNAL_TYPE_DISPLAY_PORT)
             tmds_present = true;
     }


@@ -3602,29 +3602,12 @@ static bool dpcd_read_sink_ext_caps(struct dc_link *link)
 
 bool dp_retrieve_lttpr_cap(struct dc_link *link)
 {
     uint8_t lttpr_dpcd_data[6];
-    bool vbios_lttpr_enable = false;
-    bool vbios_lttpr_interop = false;
-    struct dc_bios *bios = link->dc->ctx->dc_bios;
+    bool vbios_lttpr_enable = link->dc->caps.vbios_lttpr_enable;
+    bool vbios_lttpr_interop = link->dc->caps.vbios_lttpr_aware;
     enum dc_status status = DC_ERROR_UNEXPECTED;
     bool is_lttpr_present = false;
 
     memset(lttpr_dpcd_data, '\0', sizeof(lttpr_dpcd_data));
 
-    /* Query BIOS to determine if LTTPR functionality is forced on by system */
-    if (bios->funcs->get_lttpr_caps) {
-        enum bp_result bp_query_result;
-        uint8_t is_vbios_lttpr_enable = 0;
-
-        bp_query_result = bios->funcs->get_lttpr_caps(bios, &is_vbios_lttpr_enable);
-        vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
-    }
-
-    if (bios->funcs->get_lttpr_interop) {
-        enum bp_result bp_query_result;
-        uint8_t is_vbios_interop_enabled = 0;
-
-        bp_query_result = bios->funcs->get_lttpr_interop(bios, &is_vbios_interop_enabled);
-        vbios_lttpr_interop = (bp_query_result == BP_RESULT_OK) && !!is_vbios_interop_enabled;
-    }
-
     /*
      * Logic to determine LTTPR mode


@@ -183,6 +183,8 @@ struct dc_caps {
     unsigned int cursor_cache_size;
     struct dc_plane_cap planes[MAX_PLANES];
     struct dc_color_caps color;
+    bool vbios_lttpr_aware;
+    bool vbios_lttpr_enable;
 };
 
 struct dc_bug_wa {


@@ -464,7 +464,7 @@ void optc2_lock_doublebuffer_enable(struct timing_generator *optc)
 
     REG_UPDATE_2(OTG_GLOBAL_CONTROL1,
             MASTER_UPDATE_LOCK_DB_X,
-            h_blank_start - 200 - 1,
+            (h_blank_start - 200 - 1) / optc1->opp_count,
             MASTER_UPDATE_LOCK_DB_Y,
             v_blank_start - 1);
 }


@@ -1788,7 +1788,6 @@ static bool dcn30_split_stream_for_mpc_or_odm(
         }
         pri_pipe->next_odm_pipe = sec_pipe;
         sec_pipe->prev_odm_pipe = pri_pipe;
-        ASSERT(sec_pipe->top_pipe == NULL);
 
         if (!sec_pipe->top_pipe)
             sec_pipe->stream_res.opp = pool->opps[pipe_idx];
@@ -2617,6 +2616,26 @@ static bool dcn30_resource_construct(
     dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
     dc->caps.color.mpc.ocsc = 1;
 
+    /* read VBIOS LTTPR caps */
+    {
+        if (ctx->dc_bios->funcs->get_lttpr_caps) {
+            enum bp_result bp_query_result;
+            uint8_t is_vbios_lttpr_enable = 0;
+
+            bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
+            dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
+        }
+
+        if (ctx->dc_bios->funcs->get_lttpr_interop) {
+            enum bp_result bp_query_result;
+            uint8_t is_vbios_interop_enabled = 0;
+
+            bp_query_result = ctx->dc_bios->funcs->get_lttpr_interop(ctx->dc_bios,
+                    &is_vbios_interop_enabled);
+            dc->caps.vbios_lttpr_aware = (bp_query_result == BP_RESULT_OK) && !!is_vbios_interop_enabled;
+        }
+    }
+
     if (dc->ctx->dce_environment == DCE_ENV_PRODUCTION_DRV)
         dc->debug = debug_defaults_drv;
     else if (dc->ctx->dce_environment == DCE_ENV_FPGA_MAXIMUS) {


@@ -146,8 +146,8 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_03_soc = {
     .min_dcfclk = 500.0, /* TODO: set this to actual min DCFCLK */
     .num_states = 1,
-    .sr_exit_time_us = 26.5,
-    .sr_enter_plus_exit_time_us = 31,
+    .sr_exit_time_us = 35.5,
+    .sr_enter_plus_exit_time_us = 40,
     .urgent_latency_us = 4.0,
     .urgent_latency_pixel_data_only_us = 4.0,
     .urgent_latency_pixel_mixed_with_vm_data_us = 4.0,


@@ -1968,6 +1968,22 @@ static bool dcn31_resource_construct(
     dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
     dc->caps.color.mpc.ocsc = 1;
 
+    /* read VBIOS LTTPR caps */
+    {
+        if (ctx->dc_bios->funcs->get_lttpr_caps) {
+            enum bp_result bp_query_result;
+            uint8_t is_vbios_lttpr_enable = 0;
+
+            bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
+            dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
+        }
+
+        /* interop bit is implicit */
+        {
+            dc->caps.vbios_lttpr_aware = true;
+        }
+    }
+
     if (dc->ctx->dce_environment == DCE_ENV_PRODUCTION_DRV)
         dc->debug = debug_defaults_drv;
     else if (dc->ctx->dce_environment == DCE_ENV_FPGA_MAXIMUS) {


@@ -267,11 +267,13 @@ void dmub_dcn31_set_outbox1_rptr(struct dmub_srv *dmub, uint32_t rptr_offset)
 
 bool dmub_dcn31_is_hw_init(struct dmub_srv *dmub)
 {
-    uint32_t is_hw_init;
+    union dmub_fw_boot_status status;
+    uint32_t is_enable;
 
-    REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_hw_init);
+    status.all = REG_READ(DMCUB_SCRATCH0);
+    REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_enable);
 
-    return is_hw_init != 0;
+    return is_enable != 0 && status.bits.dal_fw;
 }
 
 bool dmub_dcn31_is_supported(struct dmub_srv *dmub)


@@ -590,7 +590,7 @@ struct atom_firmware_info_v3_4 {
     uint8_t  board_i2c_feature_id;           // enum of atom_board_i2c_feature_id_def
     uint8_t  board_i2c_feature_gpio_id;      // i2c id find in gpio_lut data table gpio_id
     uint8_t  board_i2c_feature_slave_addr;
-    uint8_t  reserved3;
+    uint8_t  ras_rom_i2c_slave_addr;
     uint16_t bootup_mvddq_mv;
     uint16_t bootup_mvpp_mv;
     uint32_t zfbstartaddrin16mb;


@@ -26,7 +26,7 @@
 #include "amdgpu_smu.h"
 
 #define SMU13_DRIVER_IF_VERSION_INV 0xFFFFFFFF
-#define SMU13_DRIVER_IF_VERSION_YELLOW_CARP 0x03
+#define SMU13_DRIVER_IF_VERSION_YELLOW_CARP 0x04
 #define SMU13_DRIVER_IF_VERSION_ALDE 0x07
 
 /* MP Apertures */


@@ -111,7 +111,9 @@ typedef struct {
     uint32_t InWhisperMode        : 1;
     uint32_t spare0               : 1;
     uint32_t ZstateStatus         : 4;
-    uint32_t spare1               :12;
+    uint32_t spare1               : 4;
+    uint32_t DstateFun            : 4;
+    uint32_t DstateDev            : 4;
     // MP1_EXT_SCRATCH2
     uint32_t P2JobHandler         :24;
     uint32_t RsmuPmiP2FinishedCnt : 8;


@@ -353,8 +353,7 @@ static void sienna_cichlid_check_bxco_support(struct smu_context *smu)
     struct amdgpu_device *adev = smu->adev;
     uint32_t val;
 
-    if (powerplay_table->platform_caps & SMU_11_0_7_PP_PLATFORM_CAP_BACO ||
-        powerplay_table->platform_caps & SMU_11_0_7_PP_PLATFORM_CAP_MACO) {
+    if (powerplay_table->platform_caps & SMU_11_0_7_PP_PLATFORM_CAP_BACO) {
         val = RREG32_SOC15(NBIO, 0, mmRCC_BIF_STRAP0);
         smu_baco->platform_support =
             (val & RCC_BIF_STRAP0__STRAP_PX_CAPABLE_MASK) ? true :
