Linux 4.8-rc3

-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJXujXLAAoJEHm+PkMAQRiGnvcH/3HRYWZijAiKZ/epvnzyCXPc
 iK0gjuhWynEUDN52UxOkPAS7P/bF64gDDYy880cGUDV5K4Cq1a9T+HXzK47r3hLc
 AVeUXrwHGX2ftW75YagnJZTg6R2aFf+T9QZkx9btDckQuHhz8r4ww/r9RrWzNBWT
 71hl5xUSIkGz+6hGrg7Fbxeff/6huat3et3aXUkCdMVG43C9wRWWZ3EHVLb9tpmV
 yHcl8uCbw0HSfQcvNYZge7ShM5E0BIW97/l+A3KTKoYhYGqAJ1vAbGVMTRqfNBXj
 IYSdOjWOSw4apIK/3pzAPE3lAlymm97XEDnGZNQg5GPDvvmx5COAGFOrhOuN00k=
 =PCzp
 -----END PGP SIGNATURE-----

Merge tag 'v4.8-rc3' into x86/asm, to pick up fixes

Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit eb4e841099
Author: Ingo Molnar <mingo@kernel.org>
Date:   2016-08-24 12:11:29 +02:00

277 changed files with 3915 additions and 1791 deletions


@ -131,7 +131,7 @@ pygments_style = 'sphinx'
todo_include_todos = False
primary_domain = 'C'
-highlight_language = 'C'
+highlight_language = 'guess'
# -- Options for HTML output ----------------------------------------------


@ -19,5 +19,5 @@ enhancements. It can monitor up to 4 voltages, 16 temperatures and
implemented in this driver.
Specification of the chip can be found here:
-ftp:///pub/Mainboard-OEM-Sales/Services/Software&Tools/Linux_SystemMonitoring&Watchdog&GPIO/BMC-Teutates_Specification_V1.21.pdf
-ftp:///pub/Mainboard-OEM-Sales/Services/Software&Tools/Linux_SystemMonitoring&Watchdog&GPIO/Fujitsu_mainboards-1-Sensors_HowTo-en-US.pdf
+ftp://ftp.ts.fujitsu.com/pub/Mainboard-OEM-Sales/Services/Software&Tools/Linux_SystemMonitoring&Watchdog&GPIO/BMC-Teutates_Specification_V1.21.pdf
+ftp://ftp.ts.fujitsu.com/pub/Mainboard-OEM-Sales/Services/Software&Tools/Linux_SystemMonitoring&Watchdog&GPIO/Fujitsu_mainboards-1-Sensors_HowTo-en-US.pdf


@ -366,8 +366,6 @@ Domain`_ references.
Cross-referencing from reStructuredText
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. highlight:: none
To cross-reference the functions and types defined in the kernel-doc comments
from reStructuredText documents, please use the `Sphinx C Domain`_
references. For example::
@ -390,8 +388,6 @@ For further details, please refer to the `Sphinx C Domain`_ documentation.
Function documentation
----------------------
-.. highlight:: c
The general format of a function and function-like macro kernel-doc comment is::
/**
@ -572,8 +568,6 @@ DocBook XML [DEPRECATED]
Converting DocBook to Sphinx
----------------------------
-.. highlight:: none
Over time, we expect all of the documents under ``Documentation/DocBook`` to be
converted to Sphinx and reStructuredText. For most DocBook XML documents, a good
enough solution is to use the simple ``Documentation/sphinx/tmplcvt`` script,


@ -790,13 +790,12 @@ The kernel interface functions are as follows:
Data messages can have their contents extracted with the usual bunch of
socket buffer manipulation functions. A data message can be determined to
be the last one in a sequence with rxrpc_kernel_is_data_last(). When a
-data message has been used up, rxrpc_kernel_data_delivered() should be
-called on it..
+data message has been used up, rxrpc_kernel_data_consumed() should be
+called on it.
-Non-data messages should be handled to rxrpc_kernel_free_skb() to dispose
-of. It is possible to get extra refs on all types of message for later
-freeing, but this may pin the state of a call until the message is finally
-freed.
+Messages should be handled to rxrpc_kernel_free_skb() to dispose of. It
+is possible to get extra refs on all types of message for later freeing,
+but this may pin the state of a call until the message is finally freed.
(*) Accept an incoming call.
@ -821,12 +820,14 @@ The kernel interface functions are as follows:
Other errors may be returned if the call had been aborted (-ECONNABORTED)
or had timed out (-ETIME).
-(*) Record the delivery of a data message and free it.
+(*) Record the delivery of a data message.
-void rxrpc_kernel_data_delivered(struct sk_buff *skb);
+void rxrpc_kernel_data_consumed(struct rxrpc_call *call,
+struct sk_buff *skb);
-This is used to record a data message as having been delivered and to
-update the ACK state for the call. The socket buffer will be freed.
+This is used to record a data message as having been consumed and to
+update the ACK state for the call. The message must still be passed to
+rxrpc_kernel_free_skb() for disposal by the caller.
(*) Free a message.
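
An illustrative sketch of the consume-then-free pattern described above (not
part of the patch; it assumes the rxrpc kernel API exactly as documented in
this file, and the handler name and payload handling are placeholders):

    #include <net/af_rxrpc.h>

    /* Sketch: handle one received rxrpc data message under the new rules.
     * rxrpc_kernel_data_consumed() records the message as consumed and
     * updates the ACK state; it does NOT free the skb, so the caller must
     * still hand the message to rxrpc_kernel_free_skb(). */
    static void example_consume_data(struct rxrpc_call *call,
                                     struct sk_buff *skb)
    {
            bool last = rxrpc_kernel_is_data_last(skb);

            /* ... extract the payload with the usual skb helpers ... */

            rxrpc_kernel_data_consumed(call, skb);
            rxrpc_kernel_free_skb(skb);

            if (last) {
                    /* the received data sequence is complete */
            }
    }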


@ -164,7 +164,32 @@ load n/2 modules more and try again.
Again, if you find the offending module(s), it(they) must be unloaded every time
before hibernation, and please report the problem with it(them).
-c) Advanced debugging
+c) Using the "test_resume" hibernation option
+
+/sys/power/disk generally tells the kernel what to do after creating a
+hibernation image. One of the available options is "test_resume" which
+causes the just created image to be used for immediate restoration. Namely,
+after doing:
+
+# echo test_resume > /sys/power/disk
+# echo disk > /sys/power/state
+
+a hibernation image will be created and a resume from it will be triggered
+immediately without involving the platform firmware in any way.
+
+That test can be used to check if failures to resume from hibernation are
+related to bad interactions with the platform firmware. That is, if the above
+works every time, but resume from actual hibernation does not work or is
+unreliable, the platform firmware may be responsible for the failures.
+
+On architectures and platforms that support using different kernels to restore
+hibernation images (that is, the kernel used to read the image from storage and
+load it into memory is different from the one included in the image) or support
+kernel address space randomization, it also can be used to check if failures
+to resume may be related to the differences between the restore and image
+kernels.
+
+d) Advanced debugging
In case that hibernation does not work on your system even in the minimal
configuration and compiling more drivers as modules is not practical or some
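
As an illustrative companion to the "test_resume" procedure in section c)
above (not part of the patch; the sysfs paths and option strings are the
documented ones, the helper program itself is assumed), the same test cycle
can be driven from a small C program, e.g. in an automated test rig:

    /* Sketch: run one test_resume cycle programmatically. */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    static int sysfs_write(const char *path, const char *val)
    {
            int fd = open(path, O_WRONLY);
            ssize_t n;

            if (fd < 0)
                    return -1;
            n = write(fd, val, strlen(val));
            close(fd);
            return n == (ssize_t)strlen(val) ? 0 : -1;
    }

    int main(void)
    {
            if (sysfs_write("/sys/power/disk", "test_resume"))
                    return 1;
            /* Creates an image and immediately resumes from it. */
            return sysfs_write("/sys/power/state", "disk") ? 1 : 0;
    }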


@ -1,75 +1,76 @@
-Power Management Interface
+Power Management Interface for System Sleep
+Copyright (c) 2016 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-The power management subsystem provides a unified sysfs interface to
-userspace, regardless of what architecture or platform one is
-running. The interface exists in /sys/power/ directory (assuming sysfs
-is mounted at /sys).
+The power management subsystem provides userspace with a unified sysfs interface
+for system sleep regardless of the underlying system architecture or platform.
+The interface is located in the /sys/power/ directory (assuming that sysfs is
+mounted at /sys).
-/sys/power/state controls system power state. Reading from this file
-returns what states are supported, which is hard-coded to 'freeze',
-'standby' (Power-On Suspend), 'mem' (Suspend-to-RAM), and 'disk'
-(Suspend-to-Disk).
+/sys/power/state is the system sleep state control file.
-Writing to this file one of those strings causes the system to
-transition into that state. Please see the file
-Documentation/power/states.txt for a description of each of those
-states.
+Reading from it returns a list of supported sleep states, encoded as:
+'freeze' (Suspend-to-Idle)
+'standby' (Power-On Suspend)
+'mem' (Suspend-to-RAM)
+'disk' (Suspend-to-Disk)
-/sys/power/disk controls the operating mode of the suspend-to-disk
-mechanism. Suspend-to-disk can be handled in several ways. We have a
-few options for putting the system to sleep - using the platform driver
-(e.g. ACPI or other suspend_ops), powering off the system or rebooting the
-system (for testing).
+Suspend-to-Idle is always supported. Suspend-to-Disk is always supported
+too as long the kernel has been configured to support hibernation at all
+(ie. CONFIG_HIBERNATION is set in the kernel configuration file). Support
+for Suspend-to-RAM and Power-On Suspend depends on the capabilities of the
+platform.
-Additionally, /sys/power/disk can be used to turn on one of the two testing
-modes of the suspend-to-disk mechanism: 'testproc' or 'test'. If the
-suspend-to-disk mechanism is in the 'testproc' mode, writing 'disk' to
-/sys/power/state will cause the kernel to disable nonboot CPUs and freeze
-tasks, wait for 5 seconds, unfreeze tasks and enable nonboot CPUs. If it is
-in the 'test' mode, writing 'disk' to /sys/power/state will cause the kernel
-to disable nonboot CPUs and freeze tasks, shrink memory, suspend devices, wait
-for 5 seconds, resume devices, unfreeze tasks and enable nonboot CPUs. Then,
-we are able to look in the log messages and work out, for example, which code
-is being slow and which device drivers are misbehaving.
+If one of the strings listed in /sys/power/state is written to it, the system
+will attempt to transition into the corresponding sleep state. Refer to
+Documentation/power/states.txt for a description of each of those states.
-Reading from this file will display all supported modes and the currently
-selected one in brackets, for example
+/sys/power/disk controls the operating mode of hibernation (Suspend-to-Disk).
+Specifically, it tells the kernel what to do after creating a hibernation image.
-[shutdown] reboot test testproc
+Reading from it returns a list of supported options encoded as:
-Writing to this file will accept one of
+'platform' (put the system into sleep using a platform-provided method)
+'shutdown' (shut the system down)
+'reboot' (reboot the system)
+'suspend' (trigger a Suspend-to-RAM transition)
+'test_resume' (resume-after-hibernation test mode)
-'platform' (only if the platform supports it)
-'shutdown'
-'reboot'
-'testproc'
-'test'
+The currently selected option is printed in square brackets.
-/sys/power/image_size controls the size of the image created by
-the suspend-to-disk mechanism. It can be written a string
-representing a non-negative integer that will be used as an upper
-limit of the image size, in bytes. The suspend-to-disk mechanism will
-do its best to ensure the image size will not exceed that number. However,
-if this turns out to be impossible, it will try to suspend anyway using the
-smallest image possible. In particular, if "0" is written to this file, the
-suspend image will be as small as possible.
+The 'platform' option is only available if the platform provides a special
+mechanism to put the system to sleep after creating a hibernation image (ACPI
+does that, for example). The 'suspend' option is available if Suspend-to-RAM
+is supported. Refer to Documentation/power/basic_pm_debugging.txt for the
+description of the 'test_resume' option.
-Reading from this file will display the current image size limit, which
-is set to 2/5 of available RAM by default.
+To select an option, write the string representing it to /sys/power/disk.
-/sys/power/pm_trace controls the code which saves the last PM event point in
-the RTC across reboots, so that you can debug a machine that just hangs
-during suspend (or more commonly, during resume). Namely, the RTC is only
-used to save the last PM event point if this file contains '1'. Initially it
-contains '0' which may be changed to '1' by writing a string representing a
-nonzero integer into it.
+/sys/power/image_size controls the size of hibernation images.
-To use this debugging feature you should attempt to suspend the machine, then
-reboot it and run
+It can be written a string representing a non-negative integer that will be
+used as a best-effort upper limit of the image size, in bytes. The hibernation
+core will do its best to ensure that the image size will not exceed that number.
+However, if that turns out to be impossible to achieve, a hibernation image will
+still be created and its size will be as small as possible. In particular,
+writing '0' to this file will enforce hibernation images to be as small as
+possible.
-dmesg -s 1000000 | grep 'hash matches'
+Reading from this file returns the current image size limit, which is set to
+around 2/5 of available RAM by default.
-CAUTION: Using it will cause your machine's real-time (CMOS) clock to be
-set to a random invalid time after a resume.
+/sys/power/pm_trace controls the PM trace mechanism saving the last suspend
+or resume event point in the RTC across reboots.
+It helps to debug hard lockups or reboots due to device driver failures that
+occur during system suspend or resume (which is more common) more effectively.
+If /sys/power/pm_trace contains '1', the fingerprint of each suspend/resume
+event point in turn will be stored in the RTC memory (overwriting the actual
+RTC information), so it will survive a system crash if one occurs right after
+storing it and it can be used later to identify the driver that caused the crash
+to happen (see Documentation/power/s2ram.txt for more information).
+Initially it contains '0' which may be changed to '1' by writing a string
+representing a nonzero integer into it.
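
To make the semantics of /sys/power/disk and /sys/power/image_size above
concrete, an illustrative sketch (not part of the patch; the 2/5 factor merely
mirrors the documented default, and _SC_PHYS_PAGES is a Linux/glibc extension):

    /* Sketch: select the 'shutdown' hibernation mode and cap the image
     * at 2/5 of physical RAM, mirroring the documented default. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int pm_write(const char *path, const char *val)
    {
            int fd = open(path, O_WRONLY);
            ssize_t n;

            if (fd < 0)
                    return -1;
            n = write(fd, val, strlen(val));
            close(fd);
            return n == (ssize_t)strlen(val) ? 0 : -1;
    }

    int main(void)
    {
            unsigned long long limit =
                    (unsigned long long)sysconf(_SC_PHYS_PAGES) *
                    (unsigned long long)sysconf(_SC_PAGE_SIZE) * 2 / 5;
            char buf[32];

            snprintf(buf, sizeof(buf), "%llu", limit);
            if (pm_write("/sys/power/disk", "shutdown") ||
                pm_write("/sys/power/image_size", buf))
                    return 1;
            return 0;
    }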


@ -42,11 +42,12 @@
caption a.headerlink { opacity: 0; }
caption a.headerlink:hover { opacity: 1; }
-/* inline literal: drop the borderbox and red color */
+/* inline literal: drop the borderbox, padding and red color */
code, .rst-content tt, .rst-content code {
color: inherit;
border: none;
+padding: unset;
background: inherit;
font-size: 85%;
}


@ -4525,6 +4525,12 @@ L: linux-edac@vger.kernel.org
S: Maintained
F: drivers/edac/sb_edac.c
+EDAC-SKYLAKE
+M: Tony Luck <tony.luck@intel.com>
+L: linux-edac@vger.kernel.org
+S: Maintained
+F: drivers/edac/skx_edac.c
+
EDAC-XGENE
APPLIED MICRO (APM) X-GENE SOC EDAC
M: Loc Ho <lho@apm.com>


@ -1,7 +1,7 @@
VERSION = 4
PATCHLEVEL = 8
SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc3
NAME = Psychotic Stoned Sheep
# *DOCUMENTATION*


@ -295,6 +295,7 @@ __und_svc_fault:
bl __und_fault
__und_svc_finish:
get_thread_info tsk
+ldr r5, [sp, #S_PSR] @ Get SVC cpsr
svc_exit r5 @ return from exception
UNWIND(.fnend )


@ -271,6 +271,12 @@ static int __init imx_gpc_init(struct device_node *node,
for (i = 0; i < IMR_NUM; i++)
writel_relaxed(~0, gpc_base + GPC_IMR1 + i * 4);
+/*
+* Clear the OF_POPULATED flag set in of_irq_init so that
+* later the GPC power domain driver will not be skipped.
+*/
+of_node_clear_flag(node, OF_POPULATED);
+
return 0;
}
IRQCHIP_DECLARE(imx_gpc, "fsl,imx6q-gpc", imx_gpc_init);


@ -728,7 +728,8 @@ static void *__init late_alloc(unsigned long sz)
{
void *ptr = (void *)__get_free_pages(PGALLOC_GFP, get_order(sz));
-BUG_ON(!ptr);
+if (!ptr || !pgtable_page_ctor(virt_to_page(ptr)))
+BUG();
return ptr;
}
@ -1155,10 +1156,19 @@ void __init sanity_check_meminfo(void)
{
phys_addr_t memblock_limit = 0;
int highmem = 0;
-phys_addr_t vmalloc_limit = __pa(vmalloc_min - 1) + 1;
+u64 vmalloc_limit;
struct memblock_region *reg;
bool should_use_highmem = false;
+/*
+* Let's use our own (unoptimized) equivalent of __pa() that is
+* not affected by wrap-arounds when sizeof(phys_addr_t) == 4.
+* The result is used as the upper bound on physical memory address
+* and may itself be outside the valid range for which phys_addr_t
+* and therefore __pa() is defined.
+*/
+vmalloc_limit = (u64)(uintptr_t)vmalloc_min - PAGE_OFFSET + PHYS_OFFSET;
for_each_memblock(memory, reg) {
phys_addr_t block_start = reg->base;
phys_addr_t block_end = reg->base + reg->size;
@ -1183,10 +1193,11 @@ void __init sanity_check_meminfo(void)
if (reg->size > size_limit) {
phys_addr_t overlap_size = reg->size - size_limit;
-pr_notice("Truncating RAM at %pa-%pa to -%pa",
-&block_start, &block_end, &vmalloc_limit);
-memblock_remove(vmalloc_limit, overlap_size);
+pr_notice("Truncating RAM at %pa-%pa",
+&block_start, &block_end);
+block_end = vmalloc_limit;
+pr_cont(" to -%pa", &block_end);
+memblock_remove(vmalloc_limit, overlap_size);
should_use_highmem = true;
}
}


@ -101,12 +101,20 @@ ENTRY(cpu_resume)
bl el2_setup // if in EL2 drop to EL1 cleanly
/* enable the MMU early - so we can access sleep_save_stash by va */
adr_l lr, __enable_mmu /* __cpu_setup will return here */
-ldr x27, =_cpu_resume /* __enable_mmu will branch here */
+adr_l x27, _resume_switched /* __enable_mmu will branch here */
adrp x25, idmap_pg_dir
adrp x26, swapper_pg_dir
b __cpu_setup
ENDPROC(cpu_resume)
+
+.pushsection ".idmap.text", "ax"
+_resume_switched:
+ldr x8, =_cpu_resume
+br x8
+ENDPROC(_resume_switched)
+.ltorg
+.popsection
ENTRY(_cpu_resume)
mrs x1, mpidr_el1
adrp x8, mpidr_hash


@ -242,7 +242,7 @@ static void note_page(struct pg_state *st, unsigned long addr, unsigned level,
static void walk_pte(struct pg_state *st, pmd_t *pmd, unsigned long start)
{
-pte_t *pte = pte_offset_kernel(pmd, 0);
+pte_t *pte = pte_offset_kernel(pmd, 0UL);
unsigned long addr;
unsigned i;
@ -254,7 +254,7 @@ static void walk_pte(struct pg_state *st, pmd_t *pmd, unsigned long start)
static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
{
-pmd_t *pmd = pmd_offset(pud, 0);
+pmd_t *pmd = pmd_offset(pud, 0UL);
unsigned long addr;
unsigned i;
@ -271,7 +271,7 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
static void walk_pud(struct pg_state *st, pgd_t *pgd, unsigned long start)
{
-pud_t *pud = pud_offset(pgd, 0);
+pud_t *pud = pud_offset(pgd, 0UL);
unsigned long addr;
unsigned i;


@ -23,6 +23,8 @@
#include <linux/module.h>
#include <linux/of.h>
#include <asm/acpi.h>
+struct pglist_data *node_data[MAX_NUMNODES] __read_mostly;
+EXPORT_SYMBOL(node_data);
nodemask_t numa_nodes_parsed __initdata;


@ -97,10 +97,10 @@
#define ENOTCONN 235 /* Transport endpoint is not connected */
#define ESHUTDOWN 236 /* Cannot send after transport endpoint shutdown */
#define ETOOMANYREFS 237 /* Too many references: cannot splice */
-#define EREFUSED ECONNREFUSED /* for HP's NFS apparently */
#define ETIMEDOUT 238 /* Connection timed out */
#define ECONNREFUSED 239 /* Connection refused */
-#define EREMOTERELEASE 240 /* Remote peer released connection */
+#define EREFUSED ECONNREFUSED /* for HP's NFS apparently */
+#define EREMOTERELEASE 240 /* Remote peer released connection */
#define EHOSTDOWN 241 /* Host is down */
#define EHOSTUNREACH 242 /* No route to host */


@ -51,8 +51,6 @@ EXPORT_SYMBOL(_parisc_requires_coherency);
DEFINE_PER_CPU(struct cpuinfo_parisc, cpu_data);
-extern int update_cr16_clocksource(void); /* from time.c */
/*
** PARISC CPU driver - claim "device" and initialize CPU data structures.
**
@ -228,12 +226,6 @@ static int processor_probe(struct parisc_device *dev)
}
#endif
-/* If we've registered more than one cpu,
-* we'll use the jiffies clocksource since cr16
-* is not synchronized between CPUs.
-*/
-update_cr16_clocksource();
return 0;
}


@ -221,18 +221,6 @@ static struct clocksource clocksource_cr16 = {
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
};
-int update_cr16_clocksource(void)
-{
-/* since the cr16 cycle counters are not synchronized across CPUs,
-we'll check if we should switch to a safe clocksource: */
-if (clocksource_cr16.rating != 0 && num_online_cpus() > 1) {
-clocksource_change_rating(&clocksource_cr16, 0);
-return 1;
-}
-return 0;
-}
void __init start_cpu_itimer(void)
{
unsigned int cpu = smp_processor_id();


@ -21,16 +21,21 @@ ENTRY(startup_continue)
lg %r15,.Lstack-.LPG1(%r13)
aghi %r15,-160
brasl %r14,decompress_kernel
-# setup registers for memory mover & branch to target
+# Set up registers for memory mover. We move the decompressed image to
+# 0x11000, starting at offset 0x11000 in the decompressed image so
+# that code living at 0x11000 in the image will end up at 0x11000 in
+# memory.
lgr %r4,%r2
lg %r2,.Loffset-.LPG1(%r13)
la %r4,0(%r2,%r4)
lg %r3,.Lmvsize-.LPG1(%r13)
lgr %r5,%r3
-# move the memory mover someplace safe
+# Move the memory mover someplace safe so it doesn't overwrite itself.
la %r1,0x200
mvc 0(mover_end-mover,%r1),mover-.LPG1(%r13)
-# decompress image is started at 0x11000
+# When the memory mover is done we pass control to
+# arch/s390/kernel/head64.S:startup_continue which lives at 0x11000 in
+# the decompressed image.
lgr %r6,%r2
br %r1
mover:


@ -678,7 +678,7 @@ CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_GHASH_S390=m
-CONFIG_CRYPTO_CRC32_S390=m
+CONFIG_CRYPTO_CRC32_S390=y
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=m
CONFIG_X509_CERTIFICATE_PARSER=m


@ -616,7 +616,7 @@ CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_GHASH_S390=m
-CONFIG_CRYPTO_CRC32_S390=m
+CONFIG_CRYPTO_CRC32_S390=y
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=m
CONFIG_X509_CERTIFICATE_PARSER=m


@ -615,7 +615,7 @@ CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_GHASH_S390=m
-CONFIG_CRYPTO_CRC32_S390=m
+CONFIG_CRYPTO_CRC32_S390=y
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=m
CONFIG_X509_CERTIFICATE_PARSER=m


@ -51,6 +51,9 @@ u32 crc32c_le_vgfm_16(u32 crc, unsigned char const *buf, size_t size);
struct kernel_fpu vxstate; \
unsigned long prealign, aligned, remaining; \
\
+if (datalen < VX_MIN_LEN + VX_ALIGN_MASK) \
+return ___crc32_sw(crc, data, datalen); \
+\
if ((unsigned long)data & VX_ALIGN_MASK) { \
prealign = VX_ALIGNMENT - \
((unsigned long)data & VX_ALIGN_MASK); \
@ -59,9 +62,6 @@ u32 crc32c_le_vgfm_16(u32 crc, unsigned char const *buf, size_t size);
data = (void *)((unsigned long)data + prealign); \
} \
\
-if (datalen < VX_MIN_LEN) \
-return ___crc32_sw(crc, data, datalen); \
-\
aligned = datalen & ~VX_ALIGN_MASK; \
remaining = datalen & VX_ALIGN_MASK; \
\


@ -234,7 +234,7 @@ CONFIG_CRYPTO_SHA256_S390=m
CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m
-CONFIG_CRYPTO_CRC32_S390=m
+CONFIG_CRYPTO_CRC32_S390=y
CONFIG_CRC7=m
# CONFIG_XZ_DEC_X86 is not set
# CONFIG_XZ_DEC_POWERPC is not set


@ -309,7 +309,9 @@ ENTRY(startup_kdump)
l %r15,.Lstack-.LPG0(%r13)
ahi %r15,-STACK_FRAME_OVERHEAD
brasl %r14,verify_facilities
-/* Continue with startup code in head64.S */
+# For uncompressed images, continue in
+# arch/s390/kernel/head64.S. For compressed images, continue in
+# arch/s390/boot/compressed/head.S.
jg startup_continue
.Lstack:


@ -237,11 +237,10 @@ char * strrchr(const char * s, int c)
EXPORT_SYMBOL(strrchr);
static inline int clcle(const char *s1, unsigned long l1,
-const char *s2, unsigned long l2,
-int *diff)
+const char *s2, unsigned long l2)
{
register unsigned long r2 asm("2") = (unsigned long) s1;
-register unsigned long r3 asm("3") = (unsigned long) l2;
+register unsigned long r3 asm("3") = (unsigned long) l1;
register unsigned long r4 asm("4") = (unsigned long) s2;
register unsigned long r5 asm("5") = (unsigned long) l2;
int cc;
@ -252,7 +251,6 @@ static inline int clcle(const char *s1, unsigned long l1,
" srl %0,28"
: "=&d" (cc), "+a" (r2), "+a" (r3),
"+a" (r4), "+a" (r5) : : "cc");
-*diff = *(char *)r2 - *(char *)r4;
return cc;
}
@ -270,9 +268,9 @@ char * strstr(const char * s1,const char * s2)
return (char *) s1;
l1 = __strend(s1) - s1;
while (l1-- >= l2) {
-int cc, dummy;
+int cc;
-cc = clcle(s1, l1, s2, l2, &dummy);
+cc = clcle(s1, l2, s2, l2);
if (!cc)
return (char *) s1;
s1++;
@ -313,11 +311,11 @@ EXPORT_SYMBOL(memchr);
*/
int memcmp(const void *cs, const void *ct, size_t n)
{
-int ret, diff;
+int ret;
-ret = clcle(cs, n, ct, n, &diff);
+ret = clcle(cs, n, ct, n);
if (ret)
-ret = diff;
+ret = ret == 1 ? -1 : 1;
return ret;
}
EXPORT_SYMBOL(memcmp);


@ -252,6 +252,8 @@ static int change_page_attr(unsigned long addr, unsigned long end,
int rc = -EINVAL;
pgd_t *pgdp;
+if (addr == end)
+return 0;
if (end >= MODULES_END)
return -EINVAL;
mutex_lock(&cpa_mutex);


@ -113,7 +113,7 @@ static int set_up_temporary_mappings(void)
return result;
}
-temp_level4_pgt = (unsigned long)pgd - __PAGE_OFFSET;
+temp_level4_pgt = __pa(pgd);
return 0;
}


@ -439,7 +439,7 @@ config CRYPTO_CRC32C_INTEL
config CRYPT_CRC32C_VPMSUM
tristate "CRC32c CRC algorithm (powerpc64)"
-depends on PPC64
+depends on PPC64 && ALTIVEC
select CRYPTO_HASH
select CRC32
help


@ -24,14 +24,14 @@
#define ROTL64(x, y) (((x) << (y)) | ((x) >> (64 - (y))))
static const u64 keccakf_rndc[24] = {
-0x0000000000000001, 0x0000000000008082, 0x800000000000808a,
-0x8000000080008000, 0x000000000000808b, 0x0000000080000001,
-0x8000000080008081, 0x8000000000008009, 0x000000000000008a,
-0x0000000000000088, 0x0000000080008009, 0x000000008000000a,
-0x000000008000808b, 0x800000000000008b, 0x8000000000008089,
-0x8000000000008003, 0x8000000000008002, 0x8000000000000080,
-0x000000000000800a, 0x800000008000000a, 0x8000000080008081,
-0x8000000000008080, 0x0000000080000001, 0x8000000080008008
+0x0000000000000001ULL, 0x0000000000008082ULL, 0x800000000000808aULL,
+0x8000000080008000ULL, 0x000000000000808bULL, 0x0000000080000001ULL,
+0x8000000080008081ULL, 0x8000000000008009ULL, 0x000000000000008aULL,
+0x0000000000000088ULL, 0x0000000080008009ULL, 0x000000008000000aULL,
+0x000000008000808bULL, 0x800000000000008bULL, 0x8000000000008089ULL,
+0x8000000000008003ULL, 0x8000000000008002ULL, 0x8000000000000080ULL,
+0x000000000000800aULL, 0x800000008000000aULL, 0x8000000080008081ULL,
+0x8000000000008080ULL, 0x0000000080000001ULL, 0x8000000080008008ULL
};
static const int keccakf_rotc[24] = {


@ -66,10 +66,10 @@ static void kona_timer_disable_and_clear(void __iomem *base)
}
-static void
+static int
kona_timer_get_counter(void __iomem *timer_base, uint32_t *msw, uint32_t *lsw)
{
-int loop_limit = 4;
+int loop_limit = 3;
/*
* Read 64-bit free running counter
@ -83,18 +83,19 @@ kona_timer_get_counter(void __iomem *timer_base, uint32_t *msw, uint32_t *lsw)
* if new hi-word is equal to previously read hi-word then stop.
*/
-while (--loop_limit) {
+do {
*msw = readl(timer_base + KONA_GPTIMER_STCHI_OFFSET);
*lsw = readl(timer_base + KONA_GPTIMER_STCLO_OFFSET);
if (*msw == readl(timer_base + KONA_GPTIMER_STCHI_OFFSET))
break;
-}
+} while (--loop_limit);
if (!loop_limit) {
pr_err("bcm_kona_timer: getting counter failed.\n");
pr_err(" Timer will be impacted\n");
+return -ETIMEDOUT;
}
-return;
+return 0;
}
static int kona_timer_set_next_event(unsigned long clc,
@ -112,8 +113,11 @@ static int kona_timer_set_next_event(unsigned long clc,
uint32_t lsw, msw;
uint32_t reg;
+int ret;
-kona_timer_get_counter(timers.tmr_regs, &msw, &lsw);
+ret = kona_timer_get_counter(timers.tmr_regs, &msw, &lsw);
+if (ret)
+return ret;
/* Load the "next" event tick value */
writel(lsw + clc, timers.tmr_regs + KONA_GPTIMER_STCM0_OFFSET);


@ -164,7 +164,7 @@ void __init gic_clocksource_init(unsigned int frequency)
gic_start_count();
}
-static void __init gic_clocksource_of_init(struct device_node *node)
+static int __init gic_clocksource_of_init(struct device_node *node)
{
struct clk *clk;
int ret;


@ -338,7 +338,6 @@ static int __init armada_xp_timer_init(struct device_node *np)
struct clk *clk = of_clk_get_by_name(np, "fixed");
int ret;
-clk = of_clk_get(np, 0);
if (IS_ERR(clk)) {
pr_err("Failed to get clock");
return PTR_ERR(clk);


@ -441,6 +441,9 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
OP_ALG_AAI_CTR_MOD128);
const bool is_rfc3686 = alg->caam.rfc3686;
+if (!ctx->authsize)
+return 0;
+
/* NULL encryption / decryption */
if (!ctx->enckeylen)
return aead_null_set_sh_desc(aead);
@ -614,7 +617,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
keys_fit_inline = true;
/* aead_givencrypt shared descriptor */
-desc = ctx->sh_desc_givenc;
+desc = ctx->sh_desc_enc;
/* Note: Context registers are saved. */
init_sh_desc_key_aead(desc, ctx, keys_fit_inline, is_rfc3686);
@ -645,13 +648,13 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
append_operation(desc, ctx->class2_alg_type |
OP_ALG_AS_INITFINAL | OP_ALG_ENCRYPT);
-/* ivsize + cryptlen = seqoutlen - authsize */
-append_math_sub_imm_u32(desc, REG3, SEQOUTLEN, IMM, ctx->authsize);
/* Read and write assoclen bytes */
append_math_add(desc, VARSEQINLEN, ZERO, REG3, CAAM_CMD_SZ);
append_math_add(desc, VARSEQOUTLEN, ZERO, REG3, CAAM_CMD_SZ);
+/* ivsize + cryptlen = seqoutlen - authsize */
+append_math_sub_imm_u32(desc, REG3, SEQOUTLEN, IMM, ctx->authsize);
/* Skip assoc data */
append_seq_fifo_store(desc, 0, FIFOST_TYPE_SKIP | FIFOLDST_VLF);
@ -697,7 +700,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
ctx->sh_desc_enc_dma = dma_map_single(jrdev, desc,
desc_bytes(desc),
DMA_TO_DEVICE);
-if (dma_mapping_error(jrdev, ctx->sh_desc_givenc_dma)) {
+if (dma_mapping_error(jrdev, ctx->sh_desc_enc_dma)) {
dev_err(jrdev, "unable to map shared descriptor\n");
return -ENOMEM;
}


@ -1898,6 +1898,7 @@ caam_hash_alloc(struct caam_hash_template *template,
template->name);
snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
template->driver_name);
+t_alg->ahash_alg.setkey = NULL;
}
alg->cra_module = THIS_MODULE;
alg->cra_init = caam_hash_cra_init;


@ -251,6 +251,14 @@ config EDAC_SBRIDGE
Support for error detection and correction the Intel
Sandy Bridge, Ivy Bridge and Haswell Integrated Memory Controllers.
+config EDAC_SKX
+tristate "Intel Skylake server Integrated MC"
+depends on EDAC_MM_EDAC && PCI && X86_64 && X86_MCE_INTEL
+depends on PCI_MMCONFIG
+help
+Support for error detection and correction the Intel
+Skylake server Integrated Memory Controllers.
+
config EDAC_MPC85XX
tristate "Freescale MPC83xx / MPC85xx"
depends on EDAC_MM_EDAC && FSL_SOC


@ -31,6 +31,7 @@ obj-$(CONFIG_EDAC_I5400) += i5400_edac.o
obj-$(CONFIG_EDAC_I7300) += i7300_edac.o
obj-$(CONFIG_EDAC_I7CORE) += i7core_edac.o
obj-$(CONFIG_EDAC_SBRIDGE) += sb_edac.o
+obj-$(CONFIG_EDAC_SKX) += skx_edac.o
obj-$(CONFIG_EDAC_E7XXX) += e7xxx_edac.o
obj-$(CONFIG_EDAC_E752X) += e752x_edac.o
obj-$(CONFIG_EDAC_I82443BXGX) += i82443bxgx_edac.o

drivers/edac/skx_edac.c (new file, 1121 lines)

File diff suppressed because it is too large.


@ -646,9 +646,9 @@ int amdgpu_gart_table_vram_pin(struct amdgpu_device *adev);
void amdgpu_gart_table_vram_unpin(struct amdgpu_device *adev);
int amdgpu_gart_init(struct amdgpu_device *adev);
void amdgpu_gart_fini(struct amdgpu_device *adev);
-void amdgpu_gart_unbind(struct amdgpu_device *adev, unsigned offset,
+void amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
int pages);
-int amdgpu_gart_bind(struct amdgpu_device *adev, unsigned offset,
+int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
int pages, struct page **pagelist,
dma_addr_t *dma_addr, uint32_t flags);


@ -200,16 +200,7 @@ static int amdgpu_atpx_validate(struct amdgpu_atpx *atpx)
atpx->is_hybrid = false;
if (valid_bits & ATPX_MS_HYBRID_GFX_SUPPORTED) {
printk("ATPX Hybrid Graphics\n");
-#if 1
-/* This is a temporary hack until the D3 cold support
-* makes it upstream. The ATPX power_control method seems
-* to still work on even if the system should be using
-* the new standardized hybrid D3 cold ACPI interface.
-*/
atpx->functions.power_cntl = true;
-#else
-atpx->functions.power_cntl = false;
-#endif
atpx->is_hybrid = true;
}


@ -221,7 +221,7 @@ void amdgpu_gart_table_vram_free(struct amdgpu_device *adev)
* Unbinds the requested pages from the gart page table and
* replaces them with the dummy page (all asics).
*/
-void amdgpu_gart_unbind(struct amdgpu_device *adev, unsigned offset,
+void amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
int pages)
{
unsigned t;
@ -268,7 +268,7 @@ void amdgpu_gart_unbind(struct amdgpu_device *adev, unsigned offset,
* (all asics).
* Returns 0 for success, -EINVAL for failure.
*/
-int amdgpu_gart_bind(struct amdgpu_device *adev, unsigned offset,
+int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
int pages, struct page **pagelist, dma_addr_t *dma_addr,
uint32_t flags)
{


@ -1187,7 +1187,8 @@ int amdgpu_uvd_ring_test_ib(struct amdgpu_ring *ring, long timeout)
r = 0;
}
-error:
fence_put(fence);
+
+error:
return r;
}


@ -1535,7 +1535,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
r = amd_sched_entity_init(&ring->sched, &vm->entity,
rq, amdgpu_sched_jobs);
if (r)
-return r;
+goto err;
vm->page_directory_fence = NULL;
@ -1565,6 +1565,9 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm)
error_free_sched_entity:
amd_sched_entity_fini(&ring->sched, &vm->entity);
+err:
+drm_free_large(vm->page_tables);
return r;
}


@ -184,7 +184,7 @@ u32 __iomem *kfd_get_kernel_doorbell(struct kfd_dev *kfd,
sizeof(u32)) + inx;
pr_debug("kfd: get kernel queue doorbell\n"
-" doorbell offset == 0x%08d\n"
+" doorbell offset == 0x%08X\n"
" kernel address == 0x%08lX\n",
*doorbell_off, (uintptr_t)(kfd->doorbell_kernel_ptr + inx));


@ -464,7 +464,7 @@ static bool drm_fb_helper_is_bound(struct drm_fb_helper *fb_helper)
/* Sometimes user space wants everything disabled, so don't steal the
* display if there's a master. */
-if (lockless_dereference(dev->master))
+if (READ_ONCE(dev->master))
return false;
drm_for_each_crtc(crtc, dev) {


@ -1333,8 +1333,6 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu,
if (ret < 0)
return ret;
-mutex_lock(&gpu->lock);
/*
* TODO
*
@ -1348,16 +1346,18 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu,
if (unlikely(event == ~0U)) {
DRM_ERROR("no free event\n");
ret = -EBUSY;
-goto out_unlock;
+goto out_pm_put;
}
fence = etnaviv_gpu_fence_alloc(gpu);
if (!fence) {
event_free(gpu, event);
ret = -ENOMEM;
-goto out_unlock;
+goto out_pm_put;
}
+
+mutex_lock(&gpu->lock);
gpu->event[event].fence = fence;
submit->fence = fence->seqno;
gpu->active_fence = submit->fence;
@ -1395,9 +1395,9 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu,
hangcheck_timer_reset(gpu);
ret = 0;
-out_unlock:
mutex_unlock(&gpu->lock);
+out_pm_put:
etnaviv_gpu_pm_put(gpu);
return ret;


@ -1854,6 +1854,7 @@ struct drm_i915_private {
enum modeset_restore modeset_restore;
struct mutex modeset_restore_lock;
struct drm_atomic_state *modeset_restore_state;
+struct drm_modeset_acquire_ctx reset_ctx;
struct list_head vm_list; /* Global list of all address spaces */
struct i915_ggtt ggtt; /* VM representing the global address space */


@ -879,9 +879,12 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
ret = i915_gem_shmem_pread(dev, obj, args, file);
/* pread for non shmem backed objects */
-if (ret == -EFAULT || ret == -ENODEV)
+if (ret == -EFAULT || ret == -ENODEV) {
+intel_runtime_pm_get(to_i915(dev));
ret = i915_gem_gtt_pread(dev, obj, args->size,
args->offset, args->data_ptr);
+intel_runtime_pm_put(to_i915(dev));
+}
out:
drm_gem_object_unreference(&obj->base);
@ -1306,7 +1309,7 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
* textures). Fallback to the shmem path in that case. */
}
-if (ret == -EFAULT) {
+if (ret == -EFAULT || ret == -ENOSPC) {
if (obj->phys_handle)
ret = i915_gem_phys_pwrite(obj, args, file);
else if (i915_gem_object_has_struct_page(obj))
@ -3169,6 +3172,8 @@ static void i915_gem_reset_engine_cleanup(struct intel_engine_cs *engine)
}
intel_ring_init_seqno(engine, engine->last_submitted_seqno);
+engine->i915->gt.active_engines &= ~intel_engine_flag(engine);
}
void i915_gem_reset(struct drm_device *dev)
@ -3186,6 +3191,7 @@ void i915_gem_reset(struct drm_device *dev)
for_each_engine(engine, dev_priv)
i915_gem_reset_engine_cleanup(engine);
+mod_delayed_work(dev_priv->wq, &dev_priv->gt.idle_work, 0);
i915_gem_context_reset(dev);


@ -2873,6 +2873,7 @@ void i915_ggtt_cleanup_hw(struct drm_device *dev)
struct i915_hw_ppgtt *ppgtt = dev_priv->mm.aliasing_ppgtt;
ppgtt->base.cleanup(&ppgtt->base);
+kfree(ppgtt);
}
i915_gem_cleanup_stolen(dev);


@ -1536,6 +1536,7 @@ enum skl_disp_power_wells {
#define BALANCE_LEG_MASK(port) (7<<(8+3*(port)))
/* Balance leg disable bits */
#define BALANCE_LEG_DISABLE_SHIFT 23
+#define BALANCE_LEG_DISABLE(port) (1 << (23 + (port)))
/*
* Fence registers


@ -600,6 +600,8 @@ static void i915_audio_component_codec_wake_override(struct device *dev,
if (!IS_SKYLAKE(dev_priv) && !IS_KABYLAKE(dev_priv))
return;
+i915_audio_component_get_power(dev);
+
/*
* Enable/disable generating the codec wake signal, overriding the
* internal logic to generate the codec wake to controller.
@ -615,6 +617,8 @@ static void i915_audio_component_codec_wake_override(struct device *dev,
I915_WRITE(HSW_AUD_CHICKENBIT, tmp);
usleep_range(1000, 1500);
}
+
+i915_audio_component_put_power(dev);
}
/* Get CDCLK in kHz */
@ -648,6 +652,7 @@ static int i915_audio_component_sync_audio_rate(struct device *dev,
!IS_HASWELL(dev_priv))
return 0;
+i915_audio_component_get_power(dev);
mutex_lock(&dev_priv->av_mutex);
/* 1. get the pipe */
intel_encoder = dev_priv->dig_port_map[port];
@ -698,6 +703,7 @@ static int i915_audio_component_sync_audio_rate(struct device *dev,
unlock:
mutex_unlock(&dev_priv->av_mutex);
+i915_audio_component_put_power(dev);
return err;
}


@ -145,7 +145,7 @@ static const struct ddi_buf_trans skl_ddi_translations_dp[] = {
static const struct ddi_buf_trans skl_u_ddi_translations_dp[] = {
{ 0x0000201B, 0x000000A2, 0x0 },
{ 0x00005012, 0x00000088, 0x0 },
-{ 0x80007011, 0x000000CD, 0x0 },
+{ 0x80007011, 0x000000CD, 0x1 },
{ 0x80009010, 0x000000C0, 0x1 },
{ 0x0000201B, 0x0000009D, 0x0 },
{ 0x80005012, 0x000000C0, 0x1 },
@ -158,7 +158,7 @@ static const struct ddi_buf_trans skl_u_ddi_translations_dp[] = {
static const struct ddi_buf_trans skl_y_ddi_translations_dp[] = {
{ 0x00000018, 0x000000A2, 0x0 },
{ 0x00005012, 0x00000088, 0x0 },
-{ 0x80007011, 0x000000CD, 0x0 },
+{ 0x80007011, 0x000000CD, 0x3 },
{ 0x80009010, 0x000000C0, 0x3 },
{ 0x00000018, 0x0000009D, 0x0 },
{ 0x80005012, 0x000000C0, 0x3 },
@ -388,6 +388,40 @@ skl_get_buf_trans_hdmi(struct drm_i915_private *dev_priv, int *n_entries)
}
}
+static int intel_ddi_hdmi_level(struct drm_i915_private *dev_priv, enum port port)
+{
+int n_hdmi_entries;
+int hdmi_level;
+int hdmi_default_entry;
+
+hdmi_level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;
+
+if (IS_BROXTON(dev_priv))
+return hdmi_level;
+
+if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
+skl_get_buf_trans_hdmi(dev_priv, &n_hdmi_entries);
+hdmi_default_entry = 8;
+} else if (IS_BROADWELL(dev_priv)) {
+n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
+hdmi_default_entry = 7;
+} else if (IS_HASWELL(dev_priv)) {
+n_hdmi_entries = ARRAY_SIZE(hsw_ddi_translations_hdmi);
+hdmi_default_entry = 6;
+} else {
+WARN(1, "ddi translation table missing\n");
+n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
+hdmi_default_entry = 7;
+}
+
+/* Choose a good default if VBT is badly populated */
+if (hdmi_level == HDMI_LEVEL_SHIFT_UNKNOWN ||
+hdmi_level >= n_hdmi_entries)
+hdmi_level = hdmi_default_entry;
+
+return hdmi_level;
+}
/*
* Starting with Haswell, DDI port buffers must be programmed with correct
* values in advance. The buffer values are different for FDI and DP modes,
@ -399,7 +433,7 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
u32 iboost_bit = 0;
-int i, n_hdmi_entries, n_dp_entries, n_edp_entries, hdmi_default_entry,
+int i, n_hdmi_entries, n_dp_entries, n_edp_entries,
size;
int hdmi_level;
enum port port;
@ -410,7 +444,7 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
const struct ddi_buf_trans *ddi_translations;
port = intel_ddi_get_encoder_port(encoder);
-hdmi_level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;
+hdmi_level = intel_ddi_hdmi_level(dev_priv, port);
if (IS_BROXTON(dev_priv)) {
if (encoder->type != INTEL_OUTPUT_HDMI)
@ -430,7 +464,6 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
skl_get_buf_trans_edp(dev_priv, &n_edp_entries);
ddi_translations_hdmi =
skl_get_buf_trans_hdmi(dev_priv, &n_hdmi_entries);
-hdmi_default_entry = 8;
/* If we're boosting the current, set bit 31 of trans1 */
if (dev_priv->vbt.ddi_port_info[port].hdmi_boost_level ||
dev_priv->vbt.ddi_port_info[port].dp_boost_level)
@ -456,7 +489,6 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
-hdmi_default_entry = 7;
} else if (IS_HASWELL(dev_priv)) {
ddi_translations_fdi = hsw_ddi_translations_fdi;
ddi_translations_dp = hsw_ddi_translations_dp;
@ -464,7 +496,6 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
ddi_translations_hdmi = hsw_ddi_translations_hdmi;
n_dp_entries = n_edp_entries = ARRAY_SIZE(hsw_ddi_translations_dp);
n_hdmi_entries = ARRAY_SIZE(hsw_ddi_translations_hdmi);
-hdmi_default_entry = 6;
} else {
WARN(1, "ddi translation table missing\n");
ddi_translations_edp = bdw_ddi_translations_dp;
@ -474,7 +505,6 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_edp);
n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
-hdmi_default_entry = 7;
}
switch (encoder->type) {
@ -505,11 +535,6 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
if (encoder->type != INTEL_OUTPUT_HDMI)
return;
-/* Choose a good default if VBT is badly populated */
-if (hdmi_level == HDMI_LEVEL_SHIFT_UNKNOWN ||
-hdmi_level >= n_hdmi_entries)
-hdmi_level = hdmi_default_entry;
/* Entry 9 is for HDMI: */
I915_WRITE(DDI_BUF_TRANS_LO(port, i),
ddi_translations_hdmi[hdmi_level].trans1 | iboost_bit);
@ -1379,14 +1404,30 @@ void intel_ddi_disable_pipe_clock(struct intel_crtc *intel_crtc)
TRANS_CLK_SEL_DISABLED);
}
-static void skl_ddi_set_iboost(struct drm_i915_private *dev_priv,
-u32 level, enum port port, int type)
+static void _skl_ddi_set_iboost(struct drm_i915_private *dev_priv,
+enum port port, uint8_t iboost)
{
+u32 tmp;
+
+tmp = I915_READ(DISPIO_CR_TX_BMU_CR0);
+tmp &= ~(BALANCE_LEG_MASK(port) | BALANCE_LEG_DISABLE(port));
+if (iboost)
+tmp |= iboost << BALANCE_LEG_SHIFT(port);
+else
+tmp |= BALANCE_LEG_DISABLE(port);
+I915_WRITE(DISPIO_CR_TX_BMU_CR0, tmp);
+}
+
+static void skl_ddi_set_iboost(struct intel_encoder *encoder, u32 level)
+{
+struct intel_digital_port *intel_dig_port = enc_to_dig_port(&encoder->base);
+struct drm_i915_private *dev_priv = to_i915(intel_dig_port->base.base.dev);
+enum port port = intel_dig_port->port;
+int type = encoder->type;
const struct ddi_buf_trans *ddi_translations;
uint8_t iboost;
uint8_t dp_iboost, hdmi_iboost;
int n_entries;
-u32 reg;
/* VBT may override standard boost values */
dp_iboost = dev_priv->vbt.ddi_port_info[port].dp_boost_level;
@ -1428,16 +1469,10 @@ static void skl_ddi_set_iboost(struct drm_i915_private *dev_priv,
return;
}
-reg = I915_READ(DISPIO_CR_TX_BMU_CR0);
-reg &= ~BALANCE_LEG_MASK(port);
-reg &= ~(1 << (BALANCE_LEG_DISABLE_SHIFT + port));
+_skl_ddi_set_iboost(dev_priv, port, iboost);
-if (iboost)
-reg |= iboost << BALANCE_LEG_SHIFT(port);
-else
-reg |= 1 << (BALANCE_LEG_DISABLE_SHIFT + port);
-I915_WRITE(DISPIO_CR_TX_BMU_CR0, reg);
+if (port == PORT_A && intel_dig_port->max_lanes == 4)
+_skl_ddi_set_iboost(dev_priv, PORT_E, iboost);
}
static void bxt_ddi_vswing_sequence(struct drm_i915_private *dev_priv,
@ -1568,7 +1603,7 @@ uint32_t ddi_signal_levels(struct intel_dp *intel_dp)
level = translate_signal_level(signal_levels);
if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv))
-skl_ddi_set_iboost(dev_priv, level, port, encoder->type);
+skl_ddi_set_iboost(encoder, level);
else if (IS_BROXTON(dev_priv))
bxt_ddi_vswing_sequence(dev_priv, level, port, encoder->type);
@ -1637,6 +1672,10 @@ static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder)
intel_dp_stop_link_train(intel_dp);
} else if (type == INTEL_OUTPUT_HDMI) {
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(encoder);
+int level = intel_ddi_hdmi_level(dev_priv, port);
+
+if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv))
+skl_ddi_set_iboost(intel_encoder, level);
intel_hdmi->set_infoframes(encoder,
crtc->config->has_hdmi_sink,


@ -3093,40 +3093,110 @@ static void intel_update_primary_planes(struct drm_device *dev)
for_each_crtc(dev, crtc) {
struct intel_plane *plane = to_intel_plane(crtc->primary);
-struct intel_plane_state *plane_state;
-
-drm_modeset_lock_crtc(crtc, &plane->base);
-plane_state = to_intel_plane_state(plane->base.state);
+struct intel_plane_state *plane_state =
+to_intel_plane_state(plane->base.state);
if (plane_state->visible)
plane->update_plane(&plane->base,
to_intel_crtc_state(crtc->state),
plane_state);
-
-drm_modeset_unlock_crtc(crtc);
}
}
+static int
+__intel_display_resume(struct drm_device *dev,
+struct drm_atomic_state *state)
+{
+struct drm_crtc_state *crtc_state;
+struct drm_crtc *crtc;
+int i, ret;
+
+intel_modeset_setup_hw_state(dev);
+i915_redisable_vga(dev);
+
+if (!state)
+return 0;
+
+for_each_crtc_in_state(state, crtc, crtc_state, i) {
+/*
+ * Force recalculation even if we restore
+ * current state. With fast modeset this may not result
+ * in a modeset when the state is compatible.
+ */
+crtc_state->mode_changed = true;
+}
+
+/* ignore any reset values/BIOS leftovers in the WM registers */
+to_intel_atomic_state(state)->skip_intermediate_wm = true;
+
+ret = drm_atomic_commit(state);
+WARN_ON(ret == -EDEADLK);
+return ret;
+}
void intel_prepare_reset(struct drm_i915_private *dev_priv)
{
+struct drm_device *dev = &dev_priv->drm;
+struct drm_modeset_acquire_ctx *ctx = &dev_priv->reset_ctx;
+struct drm_atomic_state *state;
+int ret;
/* no reset support for gen2 */
if (IS_GEN2(dev_priv))
return;
-/* reset doesn't touch the display */
+/*
+ * Need mode_config.mutex so that we don't
+ * trample ongoing ->detect() and whatnot.
+ */
+mutex_lock(&dev->mode_config.mutex);
+drm_modeset_acquire_init(ctx, 0);
+while (1) {
+ret = drm_modeset_lock_all_ctx(dev, ctx);
+if (ret != -EDEADLK)
+break;
+
+drm_modeset_backoff(ctx);
+}
+
+/* reset doesn't touch the display, but flips might get nuked anyway, */
if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv))
return;
-drm_modeset_lock_all(&dev_priv->drm);
/*
* Disabling the crtcs gracefully seems nicer. Also the
* g33 docs say we should at least disable all the planes.
*/
-intel_display_suspend(&dev_priv->drm);
+state = drm_atomic_helper_duplicate_state(dev, ctx);
+if (IS_ERR(state)) {
+ret = PTR_ERR(state);
+state = NULL;
+DRM_ERROR("Duplicating state failed with %i\n", ret);
+goto err;
+}
+
+ret = drm_atomic_helper_disable_all(dev, ctx);
+if (ret) {
+DRM_ERROR("Suspending crtc's failed with %i\n", ret);
+goto err;
+}
+
+dev_priv->modeset_restore_state = state;
+state->acquire_ctx = ctx;
+return;
+
+err:
+drm_atomic_state_free(state);
}
void intel_finish_reset(struct drm_i915_private *dev_priv)
{
+struct drm_device *dev = &dev_priv->drm;
+struct drm_modeset_acquire_ctx *ctx = &dev_priv->reset_ctx;
+struct drm_atomic_state *state = dev_priv->modeset_restore_state;
+int ret;
/*
* Flips in the rings will be nuked by the reset,
* so complete all pending flips so that user space
@ -3138,6 +3208,8 @@ void intel_finish_reset(struct drm_i915_private *dev_priv)
if (IS_GEN2(dev_priv))
return;
+dev_priv->modeset_restore_state = NULL;
+
/* reset doesn't touch the display */
if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv)) {
/*
@ -3149,29 +3221,32 @@ void intel_finish_reset(struct drm_i915_private *dev_priv)
* FIXME: Atomic will make this obsolete since we won't schedule
* CS-based flips (which might get lost in gpu resets) any more.
*/
-intel_update_primary_planes(&dev_priv->drm);
-return;
+intel_update_primary_planes(dev);
+} else {
+/*
+ * The display has been reset as well,
+ * so need a full re-initialization.
+ */
+intel_runtime_pm_disable_interrupts(dev_priv);
+intel_runtime_pm_enable_interrupts(dev_priv);
+
+intel_modeset_init_hw(dev);
+
+spin_lock_irq(&dev_priv->irq_lock);
+if (dev_priv->display.hpd_irq_setup)
+dev_priv->display.hpd_irq_setup(dev_priv);
+spin_unlock_irq(&dev_priv->irq_lock);
+
+ret = __intel_display_resume(dev, state);
+if (ret)
+DRM_ERROR("Restoring old state failed with %i\n", ret);
+
+intel_hpd_init(dev_priv);
+}
-/*
- * The display has been reset as well,
- * so need a full re-initialization.
- */
-intel_runtime_pm_disable_interrupts(dev_priv);
-intel_runtime_pm_enable_interrupts(dev_priv);
-
-intel_modeset_init_hw(&dev_priv->drm);
-
-spin_lock_irq(&dev_priv->irq_lock);
-if (dev_priv->display.hpd_irq_setup)
-dev_priv->display.hpd_irq_setup(dev_priv);
-spin_unlock_irq(&dev_priv->irq_lock);
-
-intel_display_resume(&dev_priv->drm);
-
-intel_hpd_init(dev_priv);
-
-drm_modeset_unlock_all(&dev_priv->drm);
+drm_modeset_drop_locks(ctx);
+drm_modeset_acquire_fini(ctx);
+mutex_unlock(&dev->mode_config.mutex);
}
static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
@ -16156,9 +16231,10 @@ void intel_display_resume(struct drm_device *dev)
struct drm_atomic_state *state = dev_priv->modeset_restore_state;
struct drm_modeset_acquire_ctx ctx;
int ret;
-bool setup = false;
dev_priv->modeset_restore_state = NULL;
+if (state)
+state->acquire_ctx = &ctx;
/*
* This is a cludge because with real atomic modeset mode_config.mutex
@ -16169,43 +16245,17 @@ void intel_display_resume(struct drm_device *dev)
mutex_lock(&dev->mode_config.mutex);
drm_modeset_acquire_init(&ctx, 0);
-retry:
-ret = drm_modeset_lock_all_ctx(dev, &ctx);
+while (1) {
+ret = drm_modeset_lock_all_ctx(dev, &ctx);
+if (ret != -EDEADLK)
+break;
-if (ret == 0 && !setup) {
-setup = true;
-intel_modeset_setup_hw_state(dev);
-i915_redisable_vga(dev);
-}
-
-if (ret == 0 && state) {
-struct drm_crtc_state *crtc_state;
-struct drm_crtc *crtc;
-int i;
-
-state->acquire_ctx = &ctx;
-
-/* ignore any reset values/BIOS leftovers in the WM registers */
-to_intel_atomic_state(state)->skip_intermediate_wm = true;
-
-for_each_crtc_in_state(state, crtc, crtc_state, i) {
-/*
- * Force recalculation even if we restore
- * current state. With fast modeset this may not result
- * in a modeset when the state is compatible.
- */
-crtc_state->mode_changed = true;
-}
-
-ret = drm_atomic_commit(state);
-}
-if (ret == -EDEADLK) {
drm_modeset_backoff(&ctx);
-goto retry;
}
+if (!ret)
+ret = __intel_display_resume(dev, state);
drm_modeset_drop_locks(&ctx);
drm_modeset_acquire_fini(&ctx);
mutex_unlock(&dev->mode_config.mutex);


@ -1230,12 +1230,29 @@ static int intel_sanitize_fbc_option(struct drm_i915_private *dev_priv)
if (i915.enable_fbc >= 0)
return !!i915.enable_fbc;
+if (!HAS_FBC(dev_priv))
+return 0;
+
if (IS_BROADWELL(dev_priv))
return 1;
return 0;
}
+static bool need_fbc_vtd_wa(struct drm_i915_private *dev_priv)
+{
+#ifdef CONFIG_INTEL_IOMMU
+/* WaFbcTurnOffFbcWhenHyperVisorIsUsed:skl,bxt */
+if (intel_iommu_gfx_mapped &&
+(IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv))) {
+DRM_INFO("Disabling framebuffer compression (FBC) to prevent screen flicker with VT-d enabled\n");
+return true;
+}
+#endif
+
+return false;
+}
/**
* intel_fbc_init - Initialize FBC
* @dev_priv: the i915 device
@ -1253,6 +1270,9 @@ void intel_fbc_init(struct drm_i915_private *dev_priv)
fbc->active = false;
fbc->work.scheduled = false;
+if (need_fbc_vtd_wa(dev_priv))
+mkwrite_device_info(dev_priv)->has_fbc = false;
+
i915.enable_fbc = intel_sanitize_fbc_option(dev_priv);
DRM_DEBUG_KMS("Sanitized enable_fbc value: %d\n", i915.enable_fbc);


@ -3344,6 +3344,8 @@ static uint32_t skl_wm_method2(uint32_t pixel_rate, uint32_t pipe_htotal,
plane_bytes_per_line *= 4;
plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
plane_blocks_per_line /= 4;
+} else if (tiling == DRM_FORMAT_MOD_NONE) {
+plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512) + 1;
} else {
plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
}
@ -6574,9 +6576,7 @@ void intel_init_gt_powersave(struct drm_i915_private *dev_priv)
void intel_cleanup_gt_powersave(struct drm_i915_private *dev_priv)
{
-if (IS_CHERRYVIEW(dev_priv))
-return;
-else if (IS_VALLEYVIEW(dev_priv))
+if (IS_VALLEYVIEW(dev_priv))
valleyview_cleanup_gt_powersave(dev_priv);
if (!i915.enable_rc6)


@ -1178,8 +1178,8 @@ static int bxt_init_workarounds(struct intel_engine_cs *engine)
I915_WRITE(GEN8_L3SQCREG1, L3_GENERAL_PRIO_CREDITS(62) |
L3_HIGH_PRIO_CREDITS(2));
-/* WaInsertDummyPushConstPs:bxt */
-if (IS_BXT_REVID(dev_priv, 0, BXT_REVID_B0))
+/* WaToEnableHwFixForPushConstHWBug:bxt */
+if (IS_BXT_REVID(dev_priv, BXT_REVID_C0, REVID_FOREVER))
WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2,
GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION);
@ -1222,8 +1222,8 @@ static int kbl_init_workarounds(struct intel_engine_cs *engine)
I915_WRITE(GEN8_L3SQCREG4, I915_READ(GEN8_L3SQCREG4) |
GEN8_LQSC_RO_PERF_DIS);
-/* WaInsertDummyPushConstPs:kbl */
-if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_B0))
+/* WaToEnableHwFixForPushConstHWBug:kbl */
+if (IS_KBL_REVID(dev_priv, KBL_REVID_C0, REVID_FOREVER))
WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2,
GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION);


@ -2,6 +2,9 @@ config DRM_MEDIATEK
tristate "DRM Support for Mediatek SoCs"
depends on DRM
depends on ARCH_MEDIATEK || (ARM && COMPILE_TEST)
+depends on COMMON_CLK
+depends on HAVE_ARM_SMCCC
+depends on OF
select DRM_GEM_CMA_HELPER
select DRM_KMS_HELPER
select DRM_MIPI_DSI


@ -198,16 +198,7 @@ static int radeon_atpx_validate(struct radeon_atpx *atpx)
atpx->is_hybrid = false;
if (valid_bits & ATPX_MS_HYBRID_GFX_SUPPORTED) {
printk("ATPX Hybrid Graphics\n");
-#if 1
-/* This is a temporary hack until the D3 cold support
-* makes it upstream. The ATPX power_control method seems
-* to still work on even if the system should be using
-* the new standardized hybrid D3 cold ACPI interface.
-*/
atpx->functions.power_cntl = true;
-#else
-atpx->functions.power_cntl = false;
-#endif
atpx->is_hybrid = true;
}


@ -491,7 +491,7 @@ struct it87_sio_data {
struct it87_data {
const struct attribute_group *groups[7];
enum chips type;
-u16 features;
+u32 features;
u8 peci_mask;
u8 old_peci_mask;


@ -38,6 +38,7 @@
#define AT91_I2C_TIMEOUT msecs_to_jiffies(100) /* transfer timeout */
#define AT91_I2C_DMA_THRESHOLD 8 /* enable DMA if transfer size is bigger than this threshold */
#define AUTOSUSPEND_TIMEOUT 2000
+#define AT91_I2C_MAX_ALT_CMD_DATA_SIZE 256
/* AT91 TWI register definitions */
#define AT91_TWI_CR 0x0000 /* Control Register */
@ -141,6 +142,7 @@ struct at91_twi_dev {
unsigned twi_cwgr_reg;
struct at91_twi_pdata *pdata;
bool use_dma;
+bool use_alt_cmd;
bool recv_len_abort;
u32 fifo_size;
struct at91_twi_dma dma;
@ -269,7 +271,7 @@ static void at91_twi_write_next_byte(struct at91_twi_dev *dev)
/* send stop when last byte has been written */
if (--dev->buf_len == 0)
-if (!dev->pdata->has_alt_cmd)
+if (!dev->use_alt_cmd)
at91_twi_write(dev, AT91_TWI_CR, AT91_TWI_STOP);
dev_dbg(dev->dev, "wrote 0x%x, to go %d\n", *dev->buf, dev->buf_len);
@ -292,7 +294,7 @@ static void at91_twi_write_data_dma_callback(void *data)
* we just have to enable TXCOMP one.
*/
at91_twi_write(dev, AT91_TWI_IER, AT91_TWI_TXCOMP);
-if (!dev->pdata->has_alt_cmd)
+if (!dev->use_alt_cmd)
at91_twi_write(dev, AT91_TWI_CR, AT91_TWI_STOP);
}
@ -410,7 +412,7 @@ static void at91_twi_read_next_byte(struct at91_twi_dev *dev)
}
/* send stop if second but last byte has been read */
-if (!dev->pdata->has_alt_cmd && dev->buf_len == 1)
+if (!dev->use_alt_cmd && dev->buf_len == 1)
at91_twi_write(dev, AT91_TWI_CR, AT91_TWI_STOP);
dev_dbg(dev->dev, "read 0x%x, to go %d\n", *dev->buf, dev->buf_len);
@ -426,7 +428,7 @@ static void at91_twi_read_data_dma_callback(void *data)
dma_unmap_single(dev->dev, sg_dma_address(&dev->dma.sg[0]),
dev->buf_len, DMA_FROM_DEVICE);
-if (!dev->pdata->has_alt_cmd) {
+if (!dev->use_alt_cmd) {
/* The last two bytes have to be read without using dma */
dev->buf += dev->buf_len - 2;
dev->buf_len = 2;
@ -443,7 +445,7 @@ static void at91_twi_read_data_dma(struct at91_twi_dev *dev)
struct dma_chan *chan_rx = dma->chan_rx;
size_t buf_len;
-buf_len = (dev->pdata->has_alt_cmd) ? dev->buf_len : dev->buf_len - 2;
+buf_len = (dev->use_alt_cmd) ? dev->buf_len : dev->buf_len - 2;
dma->direction = DMA_FROM_DEVICE;
/* Keep in mind that we won't use dma to read the last two bytes */
@ -651,7 +653,7 @@ static int at91_do_twi_transfer(struct at91_twi_dev *dev)
unsigned start_flags = AT91_TWI_START;
/* if only one byte is to be read, immediately stop transfer */
-if (!has_alt_cmd && dev->buf_len <= 1 &&
+if (!dev->use_alt_cmd && dev->buf_len <= 1 &&
!(dev->msg->flags & I2C_M_RECV_LEN))
start_flags |= AT91_TWI_STOP;
at91_twi_write(dev, AT91_TWI_CR, start_flags);
@ -745,7 +747,7 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num)
int ret;
unsigned int_addr_flag = 0;
struct i2c_msg *m_start = msg;
-bool is_read, use_alt_cmd = false;
+bool is_read;
dev_dbg(&adap->dev, "at91_xfer: processing %d messages:\n", num);
@ -768,14 +770,16 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num)
at91_twi_write(dev, AT91_TWI_IADR, internal_address);
}
+dev->use_alt_cmd = false;
is_read = (m_start->flags & I2C_M_RD);
if (dev->pdata->has_alt_cmd) {
-if (m_start->len > 0) {
+if (m_start->len > 0 &&
+m_start->len < AT91_I2C_MAX_ALT_CMD_DATA_SIZE) {
at91_twi_write(dev, AT91_TWI_CR, AT91_TWI_ACMEN);
at91_twi_write(dev, AT91_TWI_ACR,
AT91_TWI_ACR_DATAL(m_start->len) |
((is_read) ? AT91_TWI_ACR_DIR : 0));
-use_alt_cmd = true;
+dev->use_alt_cmd = true;
} else {
at91_twi_write(dev, AT91_TWI_CR, AT91_TWI_ACMDIS);
}
@ -784,7 +788,7 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num)
at91_twi_write(dev, AT91_TWI_MMR,
(m_start->addr << 16) |
int_addr_flag |
-((!use_alt_cmd && is_read) ? AT91_TWI_MREAD : 0));
+((!dev->use_alt_cmd && is_read) ? AT91_TWI_MREAD : 0));
dev->buf_len = m_start->len;
dev->buf = m_start->buf;


@ -158,7 +158,7 @@ static irqreturn_t bcm_iproc_i2c_isr(int irq, void *data)
if (status & BIT(IS_M_START_BUSY_SHIFT)) {
iproc_i2c->xfer_is_done = 1;
complete_all(&iproc_i2c->done);
complete(&iproc_i2c->done);
}
writel(status, iproc_i2c->base + IS_OFFSET);
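This hunk substitutes complete() for complete_all() in the interrupt handler, and the bcm_kona, brcmstb and meson hunks below repeat the same change. complete_all() saturates the completion's internal counter, so every later wait returns immediately until reinit_completion() is run; for a per-transfer completion with a single waiter, complete() is the right call. A minimal sketch of the pattern, with illustrative names:

#include <linux/completion.h>
#include <linux/interrupt.h>

struct xfer_ctx {
	struct completion done;
};

static void start_hw(struct xfer_ctx *ctx);	/* illustrative */

static int do_xfer(struct xfer_ctx *ctx)
{
	reinit_completion(&ctx->done);	/* forget any stale complete() */
	start_hw(ctx);
	if (!wait_for_completion_timeout(&ctx->done, HZ))
		return -ETIMEDOUT;
	return 0;
}

static irqreturn_t xfer_isr(int irq, void *data)
{
	struct xfer_ctx *ctx = data;

	/* complete(), not complete_all(): wake the one waiter and leave
	 * the counter usable for the next reinit_completion()/wait cycle.
	 */
	complete(&ctx->done);
	return IRQ_HANDLED;
}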


@ -229,7 +229,7 @@ static irqreturn_t bcm_kona_i2c_isr(int irq, void *devid)
dev->base + TXFCR_OFFSET);
writel(status & ~ISR_RESERVED_MASK, dev->base + ISR_OFFSET);
complete_all(&dev->done);
complete(&dev->done);
return IRQ_HANDLED;
}


@ -228,7 +228,7 @@ static irqreturn_t brcmstb_i2c_isr(int irq, void *devid)
return IRQ_NONE;
brcmstb_i2c_enable_disable_irq(dev, INT_DISABLE);
complete_all(&dev->done);
complete(&dev->done);
dev_dbg(dev->device, "isr handled");
return IRQ_HANDLED;


@ -215,7 +215,7 @@ static int ec_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg i2c_msgs[],
msg->outsize = request_len;
msg->insize = response_len;
result = cros_ec_cmd_xfer(bus->ec, msg);
result = cros_ec_cmd_xfer_status(bus->ec, msg);
if (result < 0) {
dev_err(dev, "Error transferring EC i2c message %d\n", result);
goto exit;


@ -211,7 +211,7 @@ static void meson_i2c_stop(struct meson_i2c *i2c)
meson_i2c_add_token(i2c, TOKEN_STOP);
} else {
i2c->state = STATE_IDLE;
complete_all(&i2c->done);
complete(&i2c->done);
}
}
@ -238,7 +238,7 @@ static irqreturn_t meson_i2c_irq(int irqno, void *dev_id)
dev_dbg(i2c->dev, "error bit set\n");
i2c->error = -ENXIO;
i2c->state = STATE_IDLE;
complete_all(&i2c->done);
complete(&i2c->done);
goto out;
}
@ -269,7 +269,7 @@ static irqreturn_t meson_i2c_irq(int irqno, void *dev_id)
break;
case STATE_STOP:
i2c->state = STATE_IDLE;
complete_all(&i2c->done);
complete(&i2c->done);
break;
case STATE_IDLE:
break;


@ -379,6 +379,7 @@ static int ocores_i2c_of_probe(struct platform_device *pdev,
if (!clock_frequency_present) {
dev_err(&pdev->dev,
"Missing required parameter 'opencores,ip-clock-frequency'\n");
clk_disable_unprepare(i2c->clk);
return -ENODEV;
}
i2c->ip_clock_khz = clock_frequency / 1000;
@ -467,20 +468,21 @@ static int ocores_i2c_probe(struct platform_device *pdev)
default:
dev_err(&pdev->dev, "Unsupported I/O width (%d)\n",
i2c->reg_io_width);
return -EINVAL;
ret = -EINVAL;
goto err_clk;
}
}
ret = ocores_init(&pdev->dev, i2c);
if (ret)
return ret;
goto err_clk;
init_waitqueue_head(&i2c->wait);
ret = devm_request_irq(&pdev->dev, irq, ocores_isr, 0,
pdev->name, i2c);
if (ret) {
dev_err(&pdev->dev, "Cannot claim IRQ\n");
return ret;
goto err_clk;
}
/* hook up driver to tree */
@ -494,7 +496,7 @@ static int ocores_i2c_probe(struct platform_device *pdev)
ret = i2c_add_adapter(&i2c->adap);
if (ret) {
dev_err(&pdev->dev, "Failed to add adapter\n");
return ret;
goto err_clk;
}
/* add in known devices to the bus */
@ -504,6 +506,10 @@ static int ocores_i2c_probe(struct platform_device *pdev)
}
return 0;
err_clk:
clk_disable_unprepare(i2c->clk);
return ret;
}
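The ocores changes funnel every failure after clk_prepare_enable() through one err_clk label so the clock is always released on the way out of probe. A sketch of that unwind idiom, with hypothetical names:

#include <linux/clk.h>

struct foo {
	struct clk *clk;
};

static int foo_setup(struct foo *f);	/* illustrative */

static int foo_probe(struct foo *f)
{
	int ret;

	ret = clk_prepare_enable(f->clk);
	if (ret)
		return ret;		/* nothing to undo yet */

	ret = foo_setup(f);
	if (ret)
		goto err_clk;		/* all later failures funnel here */

	return 0;

err_clk:
	clk_disable_unprepare(f->clk);
	return ret;
}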
static int ocores_i2c_remove(struct platform_device *pdev)


@ -68,7 +68,7 @@ static int i2c_demux_activate_master(struct i2c_demux_pinctrl_priv *priv, u32 ne
adap = of_find_i2c_adapter_by_node(priv->chan[new_chan].parent_np);
if (!adap) {
ret = -ENODEV;
goto err;
goto err_with_revert;
}
p = devm_pinctrl_get_select(adap->dev.parent, priv->bus_name);
@ -103,6 +103,8 @@ static int i2c_demux_activate_master(struct i2c_demux_pinctrl_priv *priv, u32 ne
err_with_put:
i2c_put_adapter(adap);
err_with_revert:
of_changeset_revert(&priv->chan[new_chan].chgset);
err:
dev_err(priv->dev, "failed to setup demux-adapter %d (%d)\n", new_chan, ret);
return ret;


@ -181,7 +181,7 @@ struct crypt_config {
u8 key[0];
};
#define MIN_IOS 16
#define MIN_IOS 64
static void clone_init(struct dm_crypt_io *, struct bio *);
static void kcryptd_queue_crypt(struct dm_crypt_io *io);


@ -191,7 +191,6 @@ struct raid_dev {
#define RT_FLAG_RS_BITMAP_LOADED 2
#define RT_FLAG_UPDATE_SBS 3
#define RT_FLAG_RESHAPE_RS 4
#define RT_FLAG_KEEP_RS_FROZEN 5
/* Array elements of 64 bit needed for rebuild/failed disk bits */
#define DISKS_ARRAY_ELEMS ((MAX_RAID_DEVICES + (sizeof(uint64_t) * 8 - 1)) / sizeof(uint64_t) / 8)
@ -861,6 +860,9 @@ static int validate_region_size(struct raid_set *rs, unsigned long region_size)
{
unsigned long min_region_size = rs->ti->len / (1 << 21);
if (rs_is_raid0(rs))
return 0;
if (!region_size) {
/*
* Choose a reasonable default. All figures in sectors.
@ -930,6 +932,8 @@ static int validate_raid_redundancy(struct raid_set *rs)
rebuild_cnt++;
switch (rs->raid_type->level) {
case 0:
break;
case 1:
if (rebuild_cnt >= rs->md.raid_disks)
goto too_many;
@ -2335,6 +2339,13 @@ static int analyse_superblocks(struct dm_target *ti, struct raid_set *rs)
case 0:
break;
default:
/*
* We have to keep any raid0 data/metadata device pairs or
* the MD raid0 personality will fail to start the array.
*/
if (rs_is_raid0(rs))
continue;
dev = container_of(rdev, struct raid_dev, rdev);
if (dev->meta_dev)
dm_put_device(ti, dev->meta_dev);
@ -2579,7 +2590,6 @@ static int rs_prepare_reshape(struct raid_set *rs)
} else {
/* Process raid1 without delta_disks */
mddev->raid_disks = rs->raid_disks;
set_bit(RT_FLAG_KEEP_RS_FROZEN, &rs->runtime_flags);
reshape = false;
}
} else {
@ -2590,7 +2600,6 @@ static int rs_prepare_reshape(struct raid_set *rs)
if (reshape) {
set_bit(RT_FLAG_RESHAPE_RS, &rs->runtime_flags);
set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);
set_bit(RT_FLAG_KEEP_RS_FROZEN, &rs->runtime_flags);
} else if (mddev->raid_disks < rs->raid_disks)
/* Create new superblocks and bitmaps, if any new disks */
set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);
@ -2902,7 +2911,6 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
goto bad;
set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);
set_bit(RT_FLAG_KEEP_RS_FROZEN, &rs->runtime_flags);
/* Takeover ain't recovery, so disable recovery */
rs_setup_recovery(rs, MaxSector);
rs_set_new(rs);
@ -3386,21 +3394,28 @@ static void raid_postsuspend(struct dm_target *ti)
{
struct raid_set *rs = ti->private;
if (test_and_clear_bit(RT_FLAG_RS_RESUMED, &rs->runtime_flags)) {
if (!rs->md.suspended)
mddev_suspend(&rs->md);
rs->md.ro = 1;
}
if (!rs->md.suspended)
mddev_suspend(&rs->md);
rs->md.ro = 1;
}
static void attempt_restore_of_faulty_devices(struct raid_set *rs)
{
int i;
uint64_t failed_devices, cleared_failed_devices = 0;
uint64_t cleared_failed_devices[DISKS_ARRAY_ELEMS];
unsigned long flags;
bool cleared = false;
struct dm_raid_superblock *sb;
struct mddev *mddev = &rs->md;
struct md_rdev *r;
/* RAID personalities have to provide hot add/remove methods or we need to bail out. */
if (!mddev->pers || !mddev->pers->hot_add_disk || !mddev->pers->hot_remove_disk)
return;
memset(cleared_failed_devices, 0, sizeof(cleared_failed_devices));
for (i = 0; i < rs->md.raid_disks; i++) {
r = &rs->dev[i].rdev;
if (test_bit(Faulty, &r->flags) && r->sb_page &&
@ -3420,7 +3435,7 @@ static void attempt_restore_of_faulty_devices(struct raid_set *rs)
* ourselves.
*/
if ((r->raid_disk >= 0) &&
(r->mddev->pers->hot_remove_disk(r->mddev, r) != 0))
(mddev->pers->hot_remove_disk(mddev, r) != 0))
/* Failed to revive this device, try next */
continue;
@ -3430,22 +3445,30 @@ static void attempt_restore_of_faulty_devices(struct raid_set *rs)
clear_bit(Faulty, &r->flags);
clear_bit(WriteErrorSeen, &r->flags);
clear_bit(In_sync, &r->flags);
if (r->mddev->pers->hot_add_disk(r->mddev, r)) {
if (mddev->pers->hot_add_disk(mddev, r)) {
r->raid_disk = -1;
r->saved_raid_disk = -1;
r->flags = flags;
} else {
r->recovery_offset = 0;
cleared_failed_devices |= 1 << i;
set_bit(i, (void *) cleared_failed_devices);
cleared = true;
}
}
}
if (cleared_failed_devices) {
/* If any failed devices could be cleared, update all sbs failed_devices bits */
if (cleared) {
uint64_t failed_devices[DISKS_ARRAY_ELEMS];
rdev_for_each(r, &rs->md) {
sb = page_address(r->sb_page);
failed_devices = le64_to_cpu(sb->failed_devices);
failed_devices &= ~cleared_failed_devices;
sb->failed_devices = cpu_to_le64(failed_devices);
sb_retrieve_failed_devices(sb, failed_devices);
for (i = 0; i < DISKS_ARRAY_ELEMS; i++)
failed_devices[i] &= ~cleared_failed_devices[i];
sb_update_failed_devices(sb, failed_devices);
}
}
}
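The rework above replaces a single u64 bitmask (cleared_failed_devices |= 1 << i) with an array updated via set_bit(). Besides scaling past 64 devices, this sidesteps the undefined behaviour of 1 << i once i reaches the width of int. A userspace sketch of the same bookkeeping (names illustrative, not the driver code):

#include <stdint.h>
#include <stdio.h>

#define NDEVS 256
static uint64_t cleared[NDEVS / 64];

static void set_dev_bit(unsigned int i)
{
	cleared[i / 64] |= 1ULL << (i % 64);
}

int main(void)
{
	set_dev_bit(40);   /* (1 << 40) on a 32-bit int would be undefined */
	set_dev_bit(200);  /* past any single 64-bit word */
	printf("%llx %llx\n",
	       (unsigned long long)cleared[0],
	       (unsigned long long)cleared[3]);
	return 0;
}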
@ -3610,26 +3633,15 @@ static void raid_resume(struct dm_target *ti)
* devices are reachable again.
*/
attempt_restore_of_faulty_devices(rs);
} else {
mddev->ro = 0;
mddev->in_sync = 0;
/*
* When passing in flags to the ctr, we expect userspace
* to reset them because they made it to the superblocks
* and reload the mapping anyway.
*
* -> only unfreeze recovery in case of a table reload or
* we'll have a bogus recovery/reshape position
* retrieved from the superblock by the ctr because
* the ongoing recovery/reshape will change it after read.
*/
if (!test_bit(RT_FLAG_KEEP_RS_FROZEN, &rs->runtime_flags))
clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
if (mddev->suspended)
mddev_resume(mddev);
}
mddev->ro = 0;
mddev->in_sync = 0;
clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
if (mddev->suspended)
mddev_resume(mddev);
}
static struct target_type raid_target = {


@ -210,14 +210,17 @@ static struct dm_path *rr_select_path(struct path_selector *ps, size_t nr_bytes)
struct path_info *pi = NULL;
struct dm_path *current_path = NULL;
local_irq_save(flags);
current_path = *this_cpu_ptr(s->current_path);
if (current_path) {
percpu_counter_dec(&s->repeat_count);
if (percpu_counter_read_positive(&s->repeat_count) > 0)
if (percpu_counter_read_positive(&s->repeat_count) > 0) {
local_irq_restore(flags);
return current_path;
}
}
spin_lock_irqsave(&s->lock, flags);
spin_lock(&s->lock);
if (!list_empty(&s->valid_paths)) {
pi = list_entry(s->valid_paths.next, struct path_info, list);
list_move_tail(&pi->list, &s->valid_paths);
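In rr_select_path() the fast path previously returned with interrupts still disabled; the fix restores them before the early return, and the slow path now takes the lock with plain spin_lock() because local_irq_save() has already disabled interrupts. The shape of the fix, with hypothetical helpers:

static struct dm_path *select_path(struct selector *s)
{
	struct dm_path *p;
	unsigned long flags;

	local_irq_save(flags);
	p = fast_lookup(s);			/* per-CPU state, no lock */
	if (p) {
		local_irq_restore(flags);	/* the previously missing restore */
		return p;
	}

	spin_lock(&s->lock);			/* irqs already off: plain lock */
	p = slow_lookup(s);
	spin_unlock_irqrestore(&s->lock, flags);
	return p;
}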


@ -152,7 +152,7 @@ module_param(lacp_rate, charp, 0);
MODULE_PARM_DESC(lacp_rate, "LACPDU tx rate to request from 802.3ad partner; "
"0 for slow, 1 for fast");
module_param(ad_select, charp, 0);
MODULE_PARM_DESC(ad_select, "803.ad aggregation selection logic; "
MODULE_PARM_DESC(ad_select, "802.3ad aggregation selection logic; "
"0 for stable (default), 1 for bandwidth, "
"2 for count");
module_param(min_links, int, 0);


@ -258,7 +258,7 @@
* BCM5325 and BCM5365 share most definitions below
*/
#define B53_ARLTBL_MAC_VID_ENTRY(n) (0x10 * (n))
#define ARLTBL_MAC_MASK 0xffffffffffff
#define ARLTBL_MAC_MASK 0xffffffffffffULL
#define ARLTBL_VID_S 48
#define ARLTBL_VID_MASK_25 0xff
#define ARLTBL_VID_MASK 0xfff


@ -3187,6 +3187,7 @@ static int mv88e6xxx_set_addr(struct dsa_switch *ds, u8 *addr)
return err;
}
#ifdef CONFIG_NET_DSA_HWMON
static int mv88e6xxx_mdio_page_read(struct dsa_switch *ds, int port, int page,
int reg)
{
@ -3212,6 +3213,7 @@ static int mv88e6xxx_mdio_page_write(struct dsa_switch *ds, int port, int page,
return ret;
}
#endif
static int mv88e6xxx_port_to_mdio_addr(struct mv88e6xxx_chip *chip, int port)
{


@ -793,6 +793,8 @@ int xgene_enet_phy_connect(struct net_device *ndev)
netdev_err(ndev, "Could not connect to PHY\n");
return -ENODEV;
}
#else
return -ENODEV;
#endif
}
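The xgene hunk adds an #else branch so the function still returns a value when PHY support is compiled out. The generic form of the fix, with illustrative names:

int foo_phy_connect(struct net_device *ndev)
{
#ifdef CONFIG_FOO_PHY
	return foo_do_connect(ndev);
#else
	/* without this, "control reaches end of non-void function" */
	return -ENODEV;
#endif
}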


@ -771,8 +771,10 @@ int arc_emac_probe(struct net_device *ndev, int interface)
priv->dev = dev;
priv->regs = devm_ioremap_resource(dev, &res_regs);
if (IS_ERR(priv->regs))
return PTR_ERR(priv->regs);
if (IS_ERR(priv->regs)) {
err = PTR_ERR(priv->regs);
goto out_put_node;
}
dev_dbg(dev, "Registers base address is 0x%p\n", priv->regs);


@ -12552,10 +12552,6 @@ static int tg3_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info,
info->data = TG3_RSS_MAX_NUM_QS;
}
/* The first interrupt vector only
* handles link interrupts.
*/
info->data -= 1;
return 0;
default:
@ -14014,6 +14010,7 @@ static int tg3_set_coalesce(struct net_device *dev, struct ethtool_coalesce *ec)
}
if ((ec->rx_coalesce_usecs > MAX_RXCOL_TICKS) ||
(!ec->rx_coalesce_usecs) ||
(ec->tx_coalesce_usecs > MAX_TXCOL_TICKS) ||
(ec->rx_max_coalesced_frames > MAX_RXMAX_FRAMES) ||
(ec->tx_max_coalesced_frames > MAX_TXMAX_FRAMES) ||


@ -403,11 +403,11 @@
#define MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII 0x00000004
#define MACB_CAPS_NO_GIGABIT_HALF 0x00000008
#define MACB_CAPS_USRIO_DISABLED 0x00000010
#define MACB_CAPS_JUMBO 0x00000020
#define MACB_CAPS_FIFO_MODE 0x10000000
#define MACB_CAPS_GIGABIT_MODE_AVAILABLE 0x20000000
#define MACB_CAPS_SG_DISABLED 0x40000000
#define MACB_CAPS_MACB_IS_GEM 0x80000000
#define MACB_CAPS_JUMBO 0x00000010
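The macb change exists because MACB_CAPS_JUMBO was defined as 0x00000010, colliding with MACB_CAPS_USRIO_DISABLED; moving it to 0x00000020 makes the capability bits distinct again. Deriving such flags with BIT() is one way to keep collisions from creeping in; a sketch with illustrative names:

#include <linux/bitops.h>

#define CAPS_USRIO_DISABLED	BIT(4)	/* 0x00000010 */
#define CAPS_JUMBO		BIT(5)	/* 0x00000020, cannot collide */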
/* Bit manipulation macros */
#define MACB_BIT(name) \


@ -1299,6 +1299,7 @@ static int
dm9000_open(struct net_device *dev)
{
struct board_info *db = netdev_priv(dev);
unsigned int irq_flags = irq_get_trigger_type(dev->irq);
if (netif_msg_ifup(db))
dev_dbg(db->dev, "enabling %s\n", dev->name);
@ -1306,9 +1307,11 @@ dm9000_open(struct net_device *dev)
/* If there is no IRQ type specified, tell the user that this is a
* problem
*/
if (irq_get_trigger_type(dev->irq) == IRQF_TRIGGER_NONE)
if (irq_flags == IRQF_TRIGGER_NONE)
dev_warn(db->dev, "WARNING: no IRQ resource flags set.\n");
irq_flags |= IRQF_SHARED;
/* GPIO0 on pre-activate PHY, Reg 1F is not set by reset */
iow(db, DM9000_GPR, 0); /* REG_1F bit0 activate phyxcer */
mdelay(1); /* delay needed by DM9000B */
@ -1316,8 +1319,7 @@ dm9000_open(struct net_device *dev)
/* Initialize DM9000 board */
dm9000_init_dm9000(dev);
if (request_irq(dev->irq, dm9000_interrupt, IRQF_SHARED,
dev->name, dev))
if (request_irq(dev->irq, dm9000_interrupt, irq_flags, dev->name, dev))
return -EAGAIN;
/* Now that we have an interrupt handler hooked up we can unmask
* our interrupts


@ -17,7 +17,7 @@ static const struct mac_stats_string g_gmac_stats_string[] = {
{"gmac_rx_octets_total_ok", MAC_STATS_FIELD_OFF(rx_good_bytes)},
{"gmac_rx_octets_bad", MAC_STATS_FIELD_OFF(rx_bad_bytes)},
{"gmac_rx_uc_pkts", MAC_STATS_FIELD_OFF(rx_uc_pkts)},
{"gamc_rx_mc_pkts", MAC_STATS_FIELD_OFF(rx_mc_pkts)},
{"gmac_rx_mc_pkts", MAC_STATS_FIELD_OFF(rx_mc_pkts)},
{"gmac_rx_bc_pkts", MAC_STATS_FIELD_OFF(rx_bc_pkts)},
{"gmac_rx_pkts_64octets", MAC_STATS_FIELD_OFF(rx_64bytes)},
{"gmac_rx_pkts_65to127", MAC_STATS_FIELD_OFF(rx_65to127)},


@ -2032,7 +2032,8 @@ const struct e1000_info e1000_82574_info = {
| FLAG2_DISABLE_ASPM_L0S
| FLAG2_DISABLE_ASPM_L1
| FLAG2_NO_DISABLE_RX
| FLAG2_DMA_BURST,
| FLAG2_DMA_BURST
| FLAG2_CHECK_SYSTIM_OVERFLOW,
.pba = 32,
.max_hw_frame_size = DEFAULT_JUMBO,
.get_variants = e1000_get_variants_82571,
@ -2053,7 +2054,8 @@ const struct e1000_info e1000_82583_info = {
| FLAG_HAS_CTRLEXT_ON_LOAD,
.flags2 = FLAG2_DISABLE_ASPM_L0S
| FLAG2_DISABLE_ASPM_L1
| FLAG2_NO_DISABLE_RX,
| FLAG2_NO_DISABLE_RX
| FLAG2_CHECK_SYSTIM_OVERFLOW,
.pba = 32,
.max_hw_frame_size = DEFAULT_JUMBO,
.get_variants = e1000_get_variants_82571,


@ -452,6 +452,7 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca);
#define FLAG2_PCIM2PCI_ARBITER_WA BIT(11)
#define FLAG2_DFLT_CRC_STRIPPING BIT(12)
#define FLAG2_CHECK_RX_HWTSTAMP BIT(13)
#define FLAG2_CHECK_SYSTIM_OVERFLOW BIT(14)
#define E1000_RX_DESC_PS(R, i) \
(&(((union e1000_rx_desc_packet_split *)((R).desc))[i]))


@ -5885,7 +5885,8 @@ const struct e1000_info e1000_pch_lpt_info = {
| FLAG_HAS_JUMBO_FRAMES
| FLAG_APME_IN_WUC,
.flags2 = FLAG2_HAS_PHY_STATS
| FLAG2_HAS_EEE,
| FLAG2_HAS_EEE
| FLAG2_CHECK_SYSTIM_OVERFLOW,
.pba = 26,
.max_hw_frame_size = 9022,
.get_variants = e1000_get_variants_ich8lan,


@ -4302,6 +4302,42 @@ void e1000e_reinit_locked(struct e1000_adapter *adapter)
clear_bit(__E1000_RESETTING, &adapter->state);
}
/**
* e1000e_sanitize_systim - sanitize raw cycle counter reads
* @hw: pointer to the HW structure
* @systim: cycle_t value read, sanitized and returned
*
* Errata for 82574/82583 possible bad bits read from SYSTIMH/L:
* check to see that the time is incrementing at a reasonable
* rate and is a multiple of incvalue.
**/
static cycle_t e1000e_sanitize_systim(struct e1000_hw *hw, cycle_t systim)
{
u64 time_delta, rem, temp;
cycle_t systim_next;
u32 incvalue;
int i;
incvalue = er32(TIMINCA) & E1000_TIMINCA_INCVALUE_MASK;
for (i = 0; i < E1000_MAX_82574_SYSTIM_REREADS; i++) {
/* latch SYSTIMH on read of SYSTIML */
systim_next = (cycle_t)er32(SYSTIML);
systim_next |= (cycle_t)er32(SYSTIMH) << 32;
time_delta = systim_next - systim;
temp = time_delta;
/* VMWare users have seen incvalue of zero, don't div / 0 */
rem = incvalue ? do_div(temp, incvalue) : (time_delta != 0);
systim = systim_next;
if ((time_delta < E1000_82574_SYSTIM_EPSILON) && (rem == 0))
break;
}
return systim;
}
/**
* e1000e_cyclecounter_read - read raw cycle counter (used by time counter)
* @cc: cyclecounter structure
@ -4312,7 +4348,7 @@ static cycle_t e1000e_cyclecounter_read(const struct cyclecounter *cc)
cc);
struct e1000_hw *hw = &adapter->hw;
u32 systimel, systimeh;
cycle_t systim, systim_next;
cycle_t systim;
/* SYSTIMH latching upon SYSTIML read does not work well.
* This means that if SYSTIML overflows after we read it but before
* we read SYSTIMH, the value of SYSTIMH has been incremented and we
@ -4335,33 +4371,9 @@ static cycle_t e1000e_cyclecounter_read(const struct cyclecounter *cc)
systim = (cycle_t)systimel;
systim |= (cycle_t)systimeh << 32;
if ((hw->mac.type == e1000_82574) || (hw->mac.type == e1000_82583)) {
u64 time_delta, rem, temp;
u32 incvalue;
int i;
if (adapter->flags2 & FLAG2_CHECK_SYSTIM_OVERFLOW)
systim = e1000e_sanitize_systim(hw, systim);
/* errata for 82574/82583 possible bad bits read from SYSTIMH/L
* check to see that the time is incrementing at a reasonable
* rate and is a multiple of incvalue
*/
incvalue = er32(TIMINCA) & E1000_TIMINCA_INCVALUE_MASK;
for (i = 0; i < E1000_MAX_82574_SYSTIM_REREADS; i++) {
/* latch SYSTIMH on read of SYSTIML */
systim_next = (cycle_t)er32(SYSTIML);
systim_next |= (cycle_t)er32(SYSTIMH) << 32;
time_delta = systim_next - systim;
temp = time_delta;
/* VMWare users have seen incvalue of zero, don't div / 0 */
rem = incvalue ? do_div(temp, incvalue) : (time_delta != 0);
systim = systim_next;
if ((time_delta < E1000_82574_SYSTIM_EPSILON) &&
(rem == 0))
break;
}
}
return systim;
}
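e1000e_sanitize_systim() re-reads SYSTIML/SYSTIMH until two consecutive readings differ by a small delta that is an exact multiple of the timer increment. The acceptance test, restated as a standalone helper that mirrors (but is not) the driver code:

#include <stdbool.h>
#include <stdint.h>

static bool systim_plausible(uint64_t prev, uint64_t next,
			     uint32_t incvalue, uint64_t epsilon)
{
	uint64_t delta = next - prev;
	/* incvalue of 0 has been seen under VMware: avoid dividing by it */
	uint64_t rem = incvalue ? delta % incvalue : (delta != 0);

	return delta < epsilon && rem == 0;
}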


@ -4554,23 +4554,38 @@ static u8 i40e_get_iscsi_tc_map(struct i40e_pf *pf)
**/
static u8 i40e_dcb_get_num_tc(struct i40e_dcbx_config *dcbcfg)
{
int i, tc_unused = 0;
u8 num_tc = 0;
int i;
u8 ret = 0;
/* Scan the ETS Config Priority Table to find
* traffic class enabled for a given priority
* and use the traffic class index to get the
* number of traffic classes enabled
* and create a bitmask of enabled TCs
*/
for (i = 0; i < I40E_MAX_USER_PRIORITY; i++) {
if (dcbcfg->etscfg.prioritytable[i] > num_tc)
num_tc = dcbcfg->etscfg.prioritytable[i];
for (i = 0; i < I40E_MAX_USER_PRIORITY; i++)
num_tc |= BIT(dcbcfg->etscfg.prioritytable[i]);
/* Now scan the bitmask to check for
* contiguous TCs starting with TC0
*/
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
if (num_tc & BIT(i)) {
if (!tc_unused) {
ret++;
} else {
pr_err("Non-contiguous TC - Disabling DCB\n");
return 1;
}
} else {
tc_unused = 1;
}
}
/* Traffic class index starts from zero so
* increment to return the actual count
*/
return num_tc + 1;
/* There is always at least TC0 */
if (!ret)
ret = 1;
return ret;
}
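The rewritten i40e_dcb_get_num_tc() no longer returns the highest TC index seen plus one; it builds a bitmask of the referenced TCs and counts them only while they are contiguous from TC0, falling back to a single TC otherwise. The same rule in a small runnable form:

#include <stdio.h>

static int num_tc(const unsigned char prio_tbl[8])
{
	unsigned int mask = 0;
	int i, n = 0, gap = 0;

	for (i = 0; i < 8; i++)		/* bitmask of enabled TCs */
		mask |= 1u << prio_tbl[i];
	for (i = 0; i < 8; i++) {
		if (mask & (1u << i)) {
			if (gap)
				return 1;	/* non-contiguous: fall back */
			n++;
		} else {
			gap = 1;
		}
	}
	return n ? n : 1;		/* there is always at least TC0 */
}

int main(void)
{
	unsigned char ok[8]  = { 0, 0, 1, 1, 2, 0, 0, 0 };	/* TC0-TC2 */
	unsigned char bad[8] = { 0, 0, 3, 0, 0, 0, 0, 0 };	/* gap at TC1-2 */

	printf("%d %d\n", num_tc(ok), num_tc(bad));		/* prints: 3 1 */
	return 0;
}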
/**


@ -744,7 +744,8 @@ static void igb_ptp_tx_hwtstamp(struct igb_adapter *adapter)
}
}
shhwtstamps.hwtstamp = ktime_sub_ns(shhwtstamps.hwtstamp, adjust);
shhwtstamps.hwtstamp =
ktime_add_ns(shhwtstamps.hwtstamp, adjust);
skb_tstamp_tx(adapter->ptp_tx_skb, &shhwtstamps);
dev_kfree_skb_any(adapter->ptp_tx_skb);
@ -767,13 +768,32 @@ void igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector,
struct sk_buff *skb)
{
__le64 *regval = (__le64 *)va;
struct igb_adapter *adapter = q_vector->adapter;
int adjust = 0;
/* The timestamp is recorded in little endian format.
* DWORD: 0 1 2 3
* Field: Reserved Reserved SYSTIML SYSTIMH
*/
igb_ptp_systim_to_hwtstamp(q_vector->adapter, skb_hwtstamps(skb),
igb_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb),
le64_to_cpu(regval[1]));
/* adjust timestamp for the RX latency based on link speed */
if (adapter->hw.mac.type == e1000_i210) {
switch (adapter->link_speed) {
case SPEED_10:
adjust = IGB_I210_RX_LATENCY_10;
break;
case SPEED_100:
adjust = IGB_I210_RX_LATENCY_100;
break;
case SPEED_1000:
adjust = IGB_I210_RX_LATENCY_1000;
break;
}
}
skb_hwtstamps(skb)->hwtstamp =
ktime_sub_ns(skb_hwtstamps(skb)->hwtstamp, adjust);
}
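The igb hunks fix the sign of the PHY latency compensation: a Tx timestamp is latched before the packet reaches the wire, so the latency is added, while an Rx timestamp is latched after the wire time, so it is subtracted. Schematically, using the kernel's ktime helpers and illustrative latency values:

/* Tx: latched before the wire, so add the latency */
shhwtstamps.hwtstamp = ktime_add_ns(shhwtstamps.hwtstamp, tx_latency_ns);

/* Rx: latched after the wire, so subtract it */
skb_hwtstamps(skb)->hwtstamp =
	ktime_sub_ns(skb_hwtstamps(skb)->hwtstamp, rx_latency_ns);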
/**
@ -825,7 +845,7 @@ void igb_ptp_rx_rgtstamp(struct igb_q_vector *q_vector,
}
}
skb_hwtstamps(skb)->hwtstamp =
ktime_add_ns(skb_hwtstamps(skb)->hwtstamp, adjust);
ktime_sub_ns(skb_hwtstamps(skb)->hwtstamp, adjust);
/* Update the last_rx_timestamp timer in order to enable watchdog check
* for error case of latched timestamp on a dropped packet.


@ -4100,6 +4100,8 @@ static void ixgbe_vlan_promisc_enable(struct ixgbe_adapter *adapter)
struct ixgbe_hw *hw = &adapter->hw;
u32 vlnctrl, i;
vlnctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
switch (hw->mac.type) {
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
@ -4112,8 +4114,7 @@ static void ixgbe_vlan_promisc_enable(struct ixgbe_adapter *adapter)
/* fall through */
case ixgbe_mac_82598EB:
/* legacy case, we can just disable VLAN filtering */
vlnctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
vlnctrl &= ~(IXGBE_VLNCTRL_VFE | IXGBE_VLNCTRL_CFIEN);
vlnctrl &= ~IXGBE_VLNCTRL_VFE;
IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlnctrl);
return;
}
@ -4125,6 +4126,10 @@ static void ixgbe_vlan_promisc_enable(struct ixgbe_adapter *adapter)
/* Set flag so we don't redo unnecessary work */
adapter->flags2 |= IXGBE_FLAG2_VLAN_PROMISC;
/* For VMDq and SR-IOV we must leave VLAN filtering enabled */
vlnctrl |= IXGBE_VLNCTRL_VFE;
IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlnctrl);
/* Add PF to all active pools */
for (i = IXGBE_VLVF_ENTRIES; --i;) {
u32 reg_offset = IXGBE_VLVFB(i * 2 + VMDQ_P(0) / 32);
@ -4191,6 +4196,11 @@ static void ixgbe_vlan_promisc_disable(struct ixgbe_adapter *adapter)
struct ixgbe_hw *hw = &adapter->hw;
u32 vlnctrl, i;
/* Set VLAN filtering to enabled */
vlnctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
vlnctrl |= IXGBE_VLNCTRL_VFE;
IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlnctrl);
switch (hw->mac.type) {
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
@ -4202,10 +4212,6 @@ static void ixgbe_vlan_promisc_disable(struct ixgbe_adapter *adapter)
break;
/* fall through */
case ixgbe_mac_82598EB:
vlnctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
vlnctrl &= ~IXGBE_VLNCTRL_CFIEN;
vlnctrl |= IXGBE_VLNCTRL_VFE;
IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlnctrl);
return;
}
@ -8390,12 +8396,14 @@ static int parse_tc_actions(struct ixgbe_adapter *adapter,
struct tcf_exts *exts, u64 *action, u8 *queue)
{
const struct tc_action *a;
LIST_HEAD(actions);
int err;
if (tc_no_actions(exts))
return -EINVAL;
tc_for_each_action(a, exts) {
tcf_exts_to_list(exts, &actions);
list_for_each_entry(a, &actions, list) {
/* Drop action */
if (is_tcf_gact_shot(a)) {
@ -9517,6 +9525,7 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* copy netdev features into list of user selectable features */
netdev->hw_features |= netdev->features |
NETIF_F_HW_VLAN_CTAG_FILTER |
NETIF_F_HW_VLAN_CTAG_RX |
NETIF_F_HW_VLAN_CTAG_TX |
NETIF_F_RXALL |


@ -245,12 +245,16 @@ static int mtk_phy_connect(struct mtk_mac *mac)
case PHY_INTERFACE_MODE_MII:
ge_mode = 1;
break;
case PHY_INTERFACE_MODE_RMII:
case PHY_INTERFACE_MODE_REVMII:
ge_mode = 2;
break;
case PHY_INTERFACE_MODE_RMII:
if (!mac->id)
goto err_phy;
ge_mode = 3;
break;
default:
dev_err(eth->dev, "invalid phy_mode\n");
return -1;
goto err_phy;
}
/* put the gmac into the right mode */
@ -263,13 +267,25 @@ static int mtk_phy_connect(struct mtk_mac *mac)
mac->phy_dev->autoneg = AUTONEG_ENABLE;
mac->phy_dev->speed = 0;
mac->phy_dev->duplex = 0;
if (of_phy_is_fixed_link(mac->of_node))
mac->phy_dev->supported |=
SUPPORTED_Pause | SUPPORTED_Asym_Pause;
mac->phy_dev->supported &= PHY_GBIT_FEATURES | SUPPORTED_Pause |
SUPPORTED_Asym_Pause;
mac->phy_dev->advertising = mac->phy_dev->supported |
ADVERTISED_Autoneg;
phy_start_aneg(mac->phy_dev);
of_node_put(np);
return 0;
err_phy:
of_node_put(np);
dev_err(eth->dev, "invalid phy_mode\n");
return -EINVAL;
}
static int mtk_mdio_init(struct mtk_eth *eth)
@ -542,15 +558,15 @@ static inline struct mtk_tx_buf *mtk_desc_to_tx_buf(struct mtk_tx_ring *ring,
return &ring->buf[idx];
}
static void mtk_tx_unmap(struct device *dev, struct mtk_tx_buf *tx_buf)
static void mtk_tx_unmap(struct mtk_eth *eth, struct mtk_tx_buf *tx_buf)
{
if (tx_buf->flags & MTK_TX_FLAGS_SINGLE0) {
dma_unmap_single(dev,
dma_unmap_single(eth->dev,
dma_unmap_addr(tx_buf, dma_addr0),
dma_unmap_len(tx_buf, dma_len0),
DMA_TO_DEVICE);
} else if (tx_buf->flags & MTK_TX_FLAGS_PAGE0) {
dma_unmap_page(dev,
dma_unmap_page(eth->dev,
dma_unmap_addr(tx_buf, dma_addr0),
dma_unmap_len(tx_buf, dma_len0),
DMA_TO_DEVICE);
@ -595,9 +611,9 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
if (skb_vlan_tag_present(skb))
txd4 |= TX_DMA_INS_VLAN | skb_vlan_tag_get(skb);
mapped_addr = dma_map_single(&dev->dev, skb->data,
mapped_addr = dma_map_single(eth->dev, skb->data,
skb_headlen(skb), DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(&dev->dev, mapped_addr)))
if (unlikely(dma_mapping_error(eth->dev, mapped_addr)))
return -ENOMEM;
WRITE_ONCE(itxd->txd1, mapped_addr);
@ -623,10 +639,10 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
n_desc++;
frag_map_size = min(frag_size, MTK_TX_DMA_BUF_LEN);
mapped_addr = skb_frag_dma_map(&dev->dev, frag, offset,
mapped_addr = skb_frag_dma_map(eth->dev, frag, offset,
frag_map_size,
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(&dev->dev, mapped_addr)))
if (unlikely(dma_mapping_error(eth->dev, mapped_addr)))
goto err_dma;
if (i == nr_frags - 1 &&
@ -679,7 +695,7 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
tx_buf = mtk_desc_to_tx_buf(ring, itxd);
/* unmap dma */
mtk_tx_unmap(&dev->dev, tx_buf);
mtk_tx_unmap(eth, tx_buf);
itxd->txd3 = TX_DMA_LS0 | TX_DMA_OWNER_CPU;
itxd = mtk_qdma_phys_to_virt(ring, itxd->txd2);
@ -836,11 +852,11 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
netdev->stats.rx_dropped++;
goto release_desc;
}
dma_addr = dma_map_single(&eth->netdev[mac]->dev,
dma_addr = dma_map_single(eth->dev,
new_data + NET_SKB_PAD,
ring->buf_size,
DMA_FROM_DEVICE);
if (unlikely(dma_mapping_error(&netdev->dev, dma_addr))) {
if (unlikely(dma_mapping_error(eth->dev, dma_addr))) {
skb_free_frag(new_data);
netdev->stats.rx_dropped++;
goto release_desc;
@ -855,7 +871,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
}
skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
dma_unmap_single(&netdev->dev, trxd.rxd1,
dma_unmap_single(eth->dev, trxd.rxd1,
ring->buf_size, DMA_FROM_DEVICE);
pktlen = RX_DMA_GET_PLEN0(trxd.rxd2);
skb->dev = netdev;
@ -937,7 +953,7 @@ static int mtk_poll_tx(struct mtk_eth *eth, int budget)
done[mac]++;
budget--;
}
mtk_tx_unmap(eth->dev, tx_buf);
mtk_tx_unmap(eth, tx_buf);
ring->last_free = desc;
atomic_inc(&ring->free_count);
@ -1092,7 +1108,7 @@ static void mtk_tx_clean(struct mtk_eth *eth)
if (ring->buf) {
for (i = 0; i < MTK_DMA_SIZE; i++)
mtk_tx_unmap(eth->dev, &ring->buf[i]);
mtk_tx_unmap(eth, &ring->buf[i]);
kfree(ring->buf);
ring->buf = NULL;
}
@ -1751,6 +1767,7 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
goto free_netdev;
}
spin_lock_init(&mac->hw_stats->stats_lock);
u64_stats_init(&mac->hw_stats->syncp);
mac->hw_stats->reg_offset = id * MTK_STAT_OFFSET;
SET_NETDEV_DEV(eth->netdev[id], eth->dev);
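Throughout the mtk hunks the DMA calls move from &dev->dev (a netdev's embedded device) to eth->dev, the device that owns the DMA engine. The DMA API requires the same struct device for the map, the mapping-error check and the unmap; a sketch of the invariant, with illustrative names:

#include <linux/dma-mapping.h>

static int xmit_one(struct device *dma_dev, void *buf, size_t len)
{
	dma_addr_t addr;

	addr = dma_map_single(dma_dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dma_dev, addr))	/* same dev as the map */
		return -ENOMEM;
	/* ... point the DMA engine at addr, wait for it to finish ... */
	dma_unmap_single(dma_dev, addr, len, DMA_TO_DEVICE);
	return 0;
}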


@ -318,6 +318,7 @@ static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
u32 *action, u32 *flow_tag)
{
const struct tc_action *a;
LIST_HEAD(actions);
if (tc_no_actions(exts))
return -EINVAL;
@ -325,7 +326,8 @@ static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
*flow_tag = MLX5_FS_DEFAULT_FLOW_TAG;
*action = 0;
tc_for_each_action(a, exts) {
tcf_exts_to_list(exts, &actions);
list_for_each_entry(a, &actions, list) {
/* Only support a single action per rule */
if (*action)
return -EINVAL;
@ -362,13 +364,15 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
u32 *action, u32 *dest_vport)
{
const struct tc_action *a;
LIST_HEAD(actions);
if (tc_no_actions(exts))
return -EINVAL;
*action = 0;
tc_for_each_action(a, exts) {
tcf_exts_to_list(exts, &actions);
list_for_each_entry(a, &actions, list) {
/* Only support a single action per rule */
if (*action)
return -EINVAL;
@ -503,6 +507,7 @@ int mlx5e_stats_flower(struct mlx5e_priv *priv,
struct mlx5e_tc_flow *flow;
struct tc_action *a;
struct mlx5_fc *counter;
LIST_HEAD(actions);
u64 bytes;
u64 packets;
u64 lastuse;
@ -518,7 +523,8 @@ int mlx5e_stats_flower(struct mlx5e_priv *priv,
mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse);
tc_for_each_action(a, f->exts)
tcf_exts_to_list(f->exts, &actions);
list_for_each_entry(a, &actions, list)
tcf_action_stats_update(a, bytes, packets, lastuse);
return 0;
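The mlx5 hunks, like the ixgbe one earlier and the mlxsw one later in this series, migrate from the removed tc_for_each_action() macro: tcf_exts_to_list() flattens the extensions into a list_head that is then walked with list_for_each_entry(). The resulting idiom, with an illustrative body:

const struct tc_action *a;
LIST_HEAD(actions);

tcf_exts_to_list(exts, &actions);
list_for_each_entry(a, &actions, list) {
	if (is_tcf_gact_shot(a))
		handle_drop(a);		/* illustrative */
}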


@ -3383,6 +3383,15 @@ MLXSW_ITEM32(reg, ritr, ipv4_fe, 0x04, 29, 1);
*/
MLXSW_ITEM32(reg, ritr, ipv6_fe, 0x04, 28, 1);
/* reg_ritr_lb_en
* Loop-back filter enable for unicast packets.
* If the flag is set then loop-back filter for unicast packets is
* implemented on the RIF. Multicast packets are always subject to
* loop-back filtering.
* Access: RW
*/
MLXSW_ITEM32(reg, ritr, lb_en, 0x04, 24, 1);
/* reg_ritr_virtual_router
* Virtual router ID associated with the router interface.
* Access: RW
@ -3484,6 +3493,7 @@ static inline void mlxsw_reg_ritr_pack(char *payload, bool enable,
mlxsw_reg_ritr_op_set(payload, op);
mlxsw_reg_ritr_rif_set(payload, rif);
mlxsw_reg_ritr_ipv4_fe_set(payload, 1);
mlxsw_reg_ritr_lb_en_set(payload, 1);
mlxsw_reg_ritr_mtu_set(payload, mtu);
mlxsw_reg_ritr_if_mac_memcpy_to(payload, mac);
}
@ -4000,6 +4010,7 @@ static inline void mlxsw_reg_ralue_pack(char *payload,
{
MLXSW_REG_ZERO(ralue, payload);
mlxsw_reg_ralue_protocol_set(payload, protocol);
mlxsw_reg_ralue_op_set(payload, op);
mlxsw_reg_ralue_virtual_router_set(payload, virtual_router);
mlxsw_reg_ralue_prefix_len_set(payload, prefix_len);
mlxsw_reg_ralue_entry_type_set(payload,


@ -942,8 +942,8 @@ static void mlxsw_sp_port_vport_destroy(struct mlxsw_sp_port *mlxsw_sp_vport)
kfree(mlxsw_sp_vport);
}
int mlxsw_sp_port_add_vid(struct net_device *dev, __be16 __always_unused proto,
u16 vid)
static int mlxsw_sp_port_add_vid(struct net_device *dev,
__be16 __always_unused proto, u16 vid)
{
struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
struct mlxsw_sp_port *mlxsw_sp_vport;
@ -956,16 +956,12 @@ int mlxsw_sp_port_add_vid(struct net_device *dev, __be16 __always_unused proto,
if (!vid)
return 0;
if (mlxsw_sp_port_vport_find(mlxsw_sp_port, vid)) {
netdev_warn(dev, "VID=%d already configured\n", vid);
if (mlxsw_sp_port_vport_find(mlxsw_sp_port, vid))
return 0;
}
mlxsw_sp_vport = mlxsw_sp_port_vport_create(mlxsw_sp_port, vid);
if (!mlxsw_sp_vport) {
netdev_err(dev, "Failed to create vPort for VID=%d\n", vid);
if (!mlxsw_sp_vport)
return -ENOMEM;
}
/* When adding the first VLAN interface on a bridged port we need to
* transition all the active 802.1Q bridge VLANs to use explicit
@ -973,24 +969,17 @@ int mlxsw_sp_port_add_vid(struct net_device *dev, __be16 __always_unused proto,
*/
if (list_is_singular(&mlxsw_sp_port->vports_list)) {
err = mlxsw_sp_port_vp_mode_trans(mlxsw_sp_port);
if (err) {
netdev_err(dev, "Failed to set to Virtual mode\n");
if (err)
goto err_port_vp_mode_trans;
}
}
err = mlxsw_sp_port_vid_learning_set(mlxsw_sp_vport, vid, false);
if (err) {
netdev_err(dev, "Failed to disable learning for VID=%d\n", vid);
if (err)
goto err_port_vid_learning_set;
}
err = mlxsw_sp_port_vlan_set(mlxsw_sp_vport, vid, vid, true, untagged);
if (err) {
netdev_err(dev, "Failed to set VLAN membership for VID=%d\n",
vid);
if (err)
goto err_port_add_vid;
}
return 0;
@ -1010,7 +999,6 @@ static int mlxsw_sp_port_kill_vid(struct net_device *dev,
struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
struct mlxsw_sp_port *mlxsw_sp_vport;
struct mlxsw_sp_fid *f;
int err;
/* VLAN 0 is removed from HW filter when device goes down, but
* it is reserved in our case, so simply return.
@ -1019,23 +1007,12 @@ static int mlxsw_sp_port_kill_vid(struct net_device *dev,
return 0;
mlxsw_sp_vport = mlxsw_sp_port_vport_find(mlxsw_sp_port, vid);
if (!mlxsw_sp_vport) {
netdev_warn(dev, "VID=%d does not exist\n", vid);
if (WARN_ON(!mlxsw_sp_vport))
return 0;
}
err = mlxsw_sp_port_vlan_set(mlxsw_sp_vport, vid, vid, false, false);
if (err) {
netdev_err(dev, "Failed to set VLAN membership for VID=%d\n",
vid);
return err;
}
mlxsw_sp_port_vlan_set(mlxsw_sp_vport, vid, vid, false, false);
err = mlxsw_sp_port_vid_learning_set(mlxsw_sp_vport, vid, true);
if (err) {
netdev_err(dev, "Failed to enable learning for VID=%d\n", vid);
return err;
}
mlxsw_sp_port_vid_learning_set(mlxsw_sp_vport, vid, true);
/* Drop FID reference. If this was the last reference the
* resources will be freed.
@ -1048,13 +1025,8 @@ static int mlxsw_sp_port_kill_vid(struct net_device *dev,
* transition all active 802.1Q bridge VLANs to use VID to FID
* mappings and set port's mode to VLAN mode.
*/
if (list_is_singular(&mlxsw_sp_port->vports_list)) {
err = mlxsw_sp_port_vlan_mode_trans(mlxsw_sp_port);
if (err) {
netdev_err(dev, "Failed to set to VLAN mode\n");
return err;
}
}
if (list_is_singular(&mlxsw_sp_port->vports_list))
mlxsw_sp_port_vlan_mode_trans(mlxsw_sp_port);
mlxsw_sp_port_vport_destroy(mlxsw_sp_vport);
@ -1149,6 +1121,7 @@ static int mlxsw_sp_port_add_cls_matchall(struct mlxsw_sp_port *mlxsw_sp_port,
bool ingress)
{
const struct tc_action *a;
LIST_HEAD(actions);
int err;
if (!tc_single_action(cls->exts)) {
@ -1156,7 +1129,8 @@ static int mlxsw_sp_port_add_cls_matchall(struct mlxsw_sp_port *mlxsw_sp_port,
return -ENOTSUPP;
}
tc_for_each_action(a, cls->exts) {
tcf_exts_to_list(cls->exts, &actions);
list_for_each_entry(a, &actions, list) {
if (!is_tcf_mirred_mirror(a) || protocol != htons(ETH_P_ALL))
return -ENOTSUPP;
@ -2076,6 +2050,18 @@ static int mlxsw_sp_port_ets_init(struct mlxsw_sp_port *mlxsw_sp_port)
return 0;
}
static int mlxsw_sp_port_pvid_vport_create(struct mlxsw_sp_port *mlxsw_sp_port)
{
mlxsw_sp_port->pvid = 1;
return mlxsw_sp_port_add_vid(mlxsw_sp_port->dev, 0, 1);
}
static int mlxsw_sp_port_pvid_vport_destroy(struct mlxsw_sp_port *mlxsw_sp_port)
{
return mlxsw_sp_port_kill_vid(mlxsw_sp_port->dev, 0, 1);
}
static int mlxsw_sp_port_create(struct mlxsw_sp *mlxsw_sp, u8 local_port,
bool split, u8 module, u8 width, u8 lane)
{
@ -2191,7 +2177,15 @@ static int mlxsw_sp_port_create(struct mlxsw_sp *mlxsw_sp, u8 local_port,
goto err_port_dcb_init;
}
err = mlxsw_sp_port_pvid_vport_create(mlxsw_sp_port);
if (err) {
dev_err(mlxsw_sp->bus_info->dev, "Port %d: Failed to create PVID vPort\n",
mlxsw_sp_port->local_port);
goto err_port_pvid_vport_create;
}
mlxsw_sp_port_switchdev_init(mlxsw_sp_port);
mlxsw_sp->ports[local_port] = mlxsw_sp_port;
err = register_netdev(dev);
if (err) {
dev_err(mlxsw_sp->bus_info->dev, "Port %d: Failed to register netdev\n",
@ -2208,24 +2202,23 @@ static int mlxsw_sp_port_create(struct mlxsw_sp *mlxsw_sp, u8 local_port,
goto err_core_port_init;
}
err = mlxsw_sp_port_vlan_init(mlxsw_sp_port);
if (err)
goto err_port_vlan_init;
mlxsw_sp->ports[local_port] = mlxsw_sp_port;
return 0;
err_port_vlan_init:
mlxsw_core_port_fini(&mlxsw_sp_port->core_port);
err_core_port_init:
unregister_netdev(dev);
err_register_netdev:
mlxsw_sp->ports[local_port] = NULL;
mlxsw_sp_port_switchdev_fini(mlxsw_sp_port);
mlxsw_sp_port_pvid_vport_destroy(mlxsw_sp_port);
err_port_pvid_vport_create:
mlxsw_sp_port_dcb_fini(mlxsw_sp_port);
err_port_dcb_init:
err_port_ets_init:
err_port_buffers_init:
err_port_admin_status_set:
err_port_mtu_set:
err_port_speed_by_width_set:
mlxsw_sp_port_swid_set(mlxsw_sp_port, MLXSW_PORT_SWID_DISABLED_PORT);
err_port_swid_set:
err_port_system_port_mapping_set:
err_dev_addr_init:
@ -2245,12 +2238,12 @@ static void mlxsw_sp_port_remove(struct mlxsw_sp *mlxsw_sp, u8 local_port)
if (!mlxsw_sp_port)
return;
mlxsw_sp->ports[local_port] = NULL;
mlxsw_core_port_fini(&mlxsw_sp_port->core_port);
unregister_netdev(mlxsw_sp_port->dev); /* This calls ndo_stop */
mlxsw_sp_port_dcb_fini(mlxsw_sp_port);
mlxsw_sp_port_kill_vid(mlxsw_sp_port->dev, 0, 1);
mlxsw_sp->ports[local_port] = NULL;
mlxsw_sp_port_switchdev_fini(mlxsw_sp_port);
mlxsw_sp_port_pvid_vport_destroy(mlxsw_sp_port);
mlxsw_sp_port_dcb_fini(mlxsw_sp_port);
mlxsw_sp_port_swid_set(mlxsw_sp_port, MLXSW_PORT_SWID_DISABLED_PORT);
mlxsw_sp_port_module_unmap(mlxsw_sp, mlxsw_sp_port->local_port);
free_percpu(mlxsw_sp_port->pcpu_stats);
@ -2659,6 +2652,26 @@ static const struct mlxsw_rx_listener mlxsw_sp_rx_listener[] = {
.local_port = MLXSW_PORT_DONT_CARE,
.trap_id = MLXSW_TRAP_ID_ARPUC,
},
{
.func = mlxsw_sp_rx_listener_func,
.local_port = MLXSW_PORT_DONT_CARE,
.trap_id = MLXSW_TRAP_ID_MTUERROR,
},
{
.func = mlxsw_sp_rx_listener_func,
.local_port = MLXSW_PORT_DONT_CARE,
.trap_id = MLXSW_TRAP_ID_TTLERROR,
},
{
.func = mlxsw_sp_rx_listener_func,
.local_port = MLXSW_PORT_DONT_CARE,
.trap_id = MLXSW_TRAP_ID_LBERROR,
},
{
.func = mlxsw_sp_rx_listener_func,
.local_port = MLXSW_PORT_DONT_CARE,
.trap_id = MLXSW_TRAP_ID_OSPF,
},
{
.func = mlxsw_sp_rx_listener_func,
.local_port = MLXSW_PORT_DONT_CARE,


@ -536,8 +536,6 @@ int mlxsw_sp_port_vid_to_fid_set(struct mlxsw_sp_port *mlxsw_sp_port,
u16 vid);
int mlxsw_sp_port_vlan_set(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid_begin,
u16 vid_end, bool is_member, bool untagged);
int mlxsw_sp_port_add_vid(struct net_device *dev, __be16 __always_unused proto,
u16 vid);
int mlxsw_sp_vport_flood_set(struct mlxsw_sp_port *mlxsw_sp_vport, u16 fid,
bool set);
void mlxsw_sp_port_active_vlans_del(struct mlxsw_sp_port *mlxsw_sp_port);


@ -330,7 +330,7 @@ static const struct mlxsw_sp_sb_cm mlxsw_sp_cpu_port_sb_cms[] = {
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_SB_CM(MLXSW_SP_BYTES_TO_CELLS(10000), 0, 0),
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_CPU_PORT_SB_CM,


@ -341,6 +341,8 @@ static int mlxsw_sp_port_pfc_set(struct mlxsw_sp_port *mlxsw_sp_port,
char pfcc_pl[MLXSW_REG_PFCC_LEN];
mlxsw_reg_pfcc_pack(pfcc_pl, mlxsw_sp_port->local_port);
mlxsw_reg_pfcc_pprx_set(pfcc_pl, mlxsw_sp_port->link.rx_pause);
mlxsw_reg_pfcc_pptx_set(pfcc_pl, mlxsw_sp_port->link.tx_pause);
mlxsw_reg_pfcc_prio_pack(pfcc_pl, pfc->pfc_en);
return mlxsw_reg_write(mlxsw_sp_port->mlxsw_sp->core, MLXSW_REG(pfcc),
@ -351,17 +353,17 @@ static int mlxsw_sp_dcbnl_ieee_setpfc(struct net_device *dev,
struct ieee_pfc *pfc)
{
struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
bool pause_en = mlxsw_sp_port_is_pause_en(mlxsw_sp_port);
int err;
if ((mlxsw_sp_port->link.tx_pause || mlxsw_sp_port->link.rx_pause) &&
pfc->pfc_en) {
if (pause_en && pfc->pfc_en) {
netdev_err(dev, "PAUSE frames already enabled on port\n");
return -EINVAL;
}
err = __mlxsw_sp_port_headroom_set(mlxsw_sp_port, dev->mtu,
mlxsw_sp_port->dcb.ets->prio_tc,
false, pfc);
pause_en, pfc);
if (err) {
netdev_err(dev, "Failed to configure port's headroom for PFC\n");
return err;
@ -380,7 +382,7 @@ static int mlxsw_sp_dcbnl_ieee_setpfc(struct net_device *dev,
err_port_pfc_set:
__mlxsw_sp_port_headroom_set(mlxsw_sp_port, dev->mtu,
mlxsw_sp_port->dcb.ets->prio_tc, false,
mlxsw_sp_port->dcb.ets->prio_tc, pause_en,
mlxsw_sp_port->dcb.pfc);
return err;
}


@ -1651,9 +1651,10 @@ static void mlxsw_sp_router_fib4_add_info_destroy(void const *data)
const struct mlxsw_sp_router_fib4_add_info *info = data;
struct mlxsw_sp_fib_entry *fib_entry = info->fib_entry;
struct mlxsw_sp *mlxsw_sp = info->mlxsw_sp;
struct mlxsw_sp_vr *vr = fib_entry->vr;
mlxsw_sp_fib_entry_destroy(fib_entry);
mlxsw_sp_vr_put(mlxsw_sp, fib_entry->vr);
mlxsw_sp_vr_put(mlxsw_sp, vr);
kfree(info);
}


@ -450,6 +450,8 @@ void mlxsw_sp_fid_destroy(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_fid *f)
kfree(f);
mlxsw_sp_fid_map(mlxsw_sp, fid, false);
mlxsw_sp_fid_op(mlxsw_sp, fid, false);
}
@ -997,13 +999,13 @@ static int mlxsw_sp_port_obj_add(struct net_device *dev,
}
static int __mlxsw_sp_port_vlans_del(struct mlxsw_sp_port *mlxsw_sp_port,
u16 vid_begin, u16 vid_end, bool init)
u16 vid_begin, u16 vid_end)
{
struct net_device *dev = mlxsw_sp_port->dev;
u16 vid, pvid;
int err;
if (!init && !mlxsw_sp_port->bridged)
if (!mlxsw_sp_port->bridged)
return -EINVAL;
err = __mlxsw_sp_port_vlans_set(mlxsw_sp_port, vid_begin, vid_end,
@ -1014,9 +1016,6 @@ static int __mlxsw_sp_port_vlans_del(struct mlxsw_sp_port *mlxsw_sp_port,
return err;
}
if (init)
goto out;
pvid = mlxsw_sp_port->pvid;
if (pvid >= vid_begin && pvid <= vid_end) {
err = mlxsw_sp_port_pvid_set(mlxsw_sp_port, 0);
@ -1028,7 +1027,6 @@ static int __mlxsw_sp_port_vlans_del(struct mlxsw_sp_port *mlxsw_sp_port,
mlxsw_sp_port_fid_leave(mlxsw_sp_port, vid_begin, vid_end);
out:
/* Changing activity bits only if HW operation succeeded */
for (vid = vid_begin; vid <= vid_end; vid++)
clear_bit(vid, mlxsw_sp_port->active_vlans);
@ -1039,8 +1037,8 @@ static int __mlxsw_sp_port_vlans_del(struct mlxsw_sp_port *mlxsw_sp_port,
static int mlxsw_sp_port_vlans_del(struct mlxsw_sp_port *mlxsw_sp_port,
const struct switchdev_obj_port_vlan *vlan)
{
return __mlxsw_sp_port_vlans_del(mlxsw_sp_port,
vlan->vid_begin, vlan->vid_end, false);
return __mlxsw_sp_port_vlans_del(mlxsw_sp_port, vlan->vid_begin,
vlan->vid_end);
}
void mlxsw_sp_port_active_vlans_del(struct mlxsw_sp_port *mlxsw_sp_port)
@ -1048,7 +1046,7 @@ void mlxsw_sp_port_active_vlans_del(struct mlxsw_sp_port *mlxsw_sp_port)
u16 vid;
for_each_set_bit(vid, mlxsw_sp_port->active_vlans, VLAN_N_VID)
__mlxsw_sp_port_vlans_del(mlxsw_sp_port, vid, vid, false);
__mlxsw_sp_port_vlans_del(mlxsw_sp_port, vid, vid);
}
static int
@ -1546,32 +1544,6 @@ void mlxsw_sp_switchdev_fini(struct mlxsw_sp *mlxsw_sp)
mlxsw_sp_fdb_fini(mlxsw_sp);
}
int mlxsw_sp_port_vlan_init(struct mlxsw_sp_port *mlxsw_sp_port)
{
struct net_device *dev = mlxsw_sp_port->dev;
int err;
/* Allow only untagged packets to ingress and tag them internally
* with VID 1.
*/
mlxsw_sp_port->pvid = 1;
err = __mlxsw_sp_port_vlans_del(mlxsw_sp_port, 0, VLAN_N_VID - 1,
true);
if (err) {
netdev_err(dev, "Unable to init VLANs\n");
return err;
}
/* Add implicit VLAN interface in the device, so that untagged
* packets will be classified to the default vFID.
*/
err = mlxsw_sp_port_add_vid(dev, 0, 1);
if (err)
netdev_err(dev, "Failed to configure default vFID\n");
return err;
}
void mlxsw_sp_port_switchdev_init(struct mlxsw_sp_port *mlxsw_sp_port)
{
mlxsw_sp_port->dev->switchdev_ops = &mlxsw_sp_port_switchdev_ops;


@ -56,6 +56,10 @@ enum {
MLXSW_TRAP_ID_IGMP_V3_REPORT = 0x34,
MLXSW_TRAP_ID_ARPBC = 0x50,
MLXSW_TRAP_ID_ARPUC = 0x51,
MLXSW_TRAP_ID_MTUERROR = 0x52,
MLXSW_TRAP_ID_TTLERROR = 0x53,
MLXSW_TRAP_ID_LBERROR = 0x54,
MLXSW_TRAP_ID_OSPF = 0x55,
MLXSW_TRAP_ID_IP2ME = 0x5F,
MLXSW_TRAP_ID_RTR_INGRESS0 = 0x70,
MLXSW_TRAP_ID_HOST_MISS_IPV4 = 0x90,


@ -52,40 +52,94 @@ static bool qed_dcbx_app_ethtype(u32 app_info_bitmap)
DCBX_APP_SF_ETHTYPE);
}
static bool qed_dcbx_ieee_app_ethtype(u32 app_info_bitmap)
{
u8 mfw_val = QED_MFW_GET_FIELD(app_info_bitmap, DCBX_APP_SF_IEEE);
/* Old MFW */
if (mfw_val == DCBX_APP_SF_IEEE_RESERVED)
return qed_dcbx_app_ethtype(app_info_bitmap);
return !!(mfw_val == DCBX_APP_SF_IEEE_ETHTYPE);
}
static bool qed_dcbx_app_port(u32 app_info_bitmap)
{
return !!(QED_MFW_GET_FIELD(app_info_bitmap, DCBX_APP_SF) ==
DCBX_APP_SF_PORT);
}
static bool qed_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id)
static bool qed_dcbx_ieee_app_port(u32 app_info_bitmap, u8 type)
{
return !!(qed_dcbx_app_ethtype(app_info_bitmap) &&
proto_id == QED_ETH_TYPE_DEFAULT);
u8 mfw_val = QED_MFW_GET_FIELD(app_info_bitmap, DCBX_APP_SF_IEEE);
/* Old MFW */
if (mfw_val == DCBX_APP_SF_IEEE_RESERVED)
return qed_dcbx_app_port(app_info_bitmap);
return !!(mfw_val == type || mfw_val == DCBX_APP_SF_IEEE_TCP_UDP_PORT);
}
static bool qed_dcbx_iscsi_tlv(u32 app_info_bitmap, u16 proto_id)
static bool qed_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
{
return !!(qed_dcbx_app_port(app_info_bitmap) &&
proto_id == QED_TCP_PORT_ISCSI);
bool ethtype;
if (ieee)
ethtype = qed_dcbx_ieee_app_ethtype(app_info_bitmap);
else
ethtype = qed_dcbx_app_ethtype(app_info_bitmap);
return !!(ethtype && (proto_id == QED_ETH_TYPE_DEFAULT));
}
static bool qed_dcbx_fcoe_tlv(u32 app_info_bitmap, u16 proto_id)
static bool qed_dcbx_iscsi_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
{
return !!(qed_dcbx_app_ethtype(app_info_bitmap) &&
proto_id == QED_ETH_TYPE_FCOE);
bool port;
if (ieee)
port = qed_dcbx_ieee_app_port(app_info_bitmap,
DCBX_APP_SF_IEEE_TCP_PORT);
else
port = qed_dcbx_app_port(app_info_bitmap);
return !!(port && (proto_id == QED_TCP_PORT_ISCSI));
}
static bool qed_dcbx_roce_tlv(u32 app_info_bitmap, u16 proto_id)
static bool qed_dcbx_fcoe_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
{
return !!(qed_dcbx_app_ethtype(app_info_bitmap) &&
proto_id == QED_ETH_TYPE_ROCE);
bool ethtype;
if (ieee)
ethtype = qed_dcbx_ieee_app_ethtype(app_info_bitmap);
else
ethtype = qed_dcbx_app_ethtype(app_info_bitmap);
return !!(ethtype && (proto_id == QED_ETH_TYPE_FCOE));
}
static bool qed_dcbx_roce_v2_tlv(u32 app_info_bitmap, u16 proto_id)
static bool qed_dcbx_roce_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
{
return !!(qed_dcbx_app_port(app_info_bitmap) &&
proto_id == QED_UDP_PORT_TYPE_ROCE_V2);
bool ethtype;
if (ieee)
ethtype = qed_dcbx_ieee_app_ethtype(app_info_bitmap);
else
ethtype = qed_dcbx_app_ethtype(app_info_bitmap);
return !!(ethtype && (proto_id == QED_ETH_TYPE_ROCE));
}
static bool qed_dcbx_roce_v2_tlv(u32 app_info_bitmap, u16 proto_id, bool ieee)
{
bool port;
if (ieee)
port = qed_dcbx_ieee_app_port(app_info_bitmap,
DCBX_APP_SF_IEEE_UDP_PORT);
else
port = qed_dcbx_app_port(app_info_bitmap);
return !!(port && (proto_id == QED_UDP_PORT_TYPE_ROCE_V2));
}
static void
@ -164,17 +218,17 @@ qed_dcbx_update_app_info(struct qed_dcbx_results *p_data,
static bool
qed_dcbx_get_app_protocol_type(struct qed_hwfn *p_hwfn,
u32 app_prio_bitmap,
u16 id, enum dcbx_protocol_type *type)
u16 id, enum dcbx_protocol_type *type, bool ieee)
{
if (qed_dcbx_fcoe_tlv(app_prio_bitmap, id)) {
if (qed_dcbx_fcoe_tlv(app_prio_bitmap, id, ieee)) {
*type = DCBX_PROTOCOL_FCOE;
} else if (qed_dcbx_roce_tlv(app_prio_bitmap, id)) {
} else if (qed_dcbx_roce_tlv(app_prio_bitmap, id, ieee)) {
*type = DCBX_PROTOCOL_ROCE;
} else if (qed_dcbx_iscsi_tlv(app_prio_bitmap, id)) {
} else if (qed_dcbx_iscsi_tlv(app_prio_bitmap, id, ieee)) {
*type = DCBX_PROTOCOL_ISCSI;
} else if (qed_dcbx_default_tlv(app_prio_bitmap, id)) {
} else if (qed_dcbx_default_tlv(app_prio_bitmap, id, ieee)) {
*type = DCBX_PROTOCOL_ETH;
} else if (qed_dcbx_roce_v2_tlv(app_prio_bitmap, id)) {
} else if (qed_dcbx_roce_v2_tlv(app_prio_bitmap, id, ieee)) {
*type = DCBX_PROTOCOL_ROCE_V2;
} else {
*type = DCBX_MAX_PROTOCOL_TYPE;
@ -194,17 +248,18 @@ static int
qed_dcbx_process_tlv(struct qed_hwfn *p_hwfn,
struct qed_dcbx_results *p_data,
struct dcbx_app_priority_entry *p_tbl,
u32 pri_tc_tbl, int count, bool dcbx_enabled)
u32 pri_tc_tbl, int count, u8 dcbx_version)
{
u8 tc, priority_map;
enum dcbx_protocol_type type;
bool enable, ieee;
u16 protocol_id;
int priority;
bool enable;
int i;
DP_VERBOSE(p_hwfn, QED_MSG_DCB, "Num APP entries = %d\n", count);
ieee = (dcbx_version == DCBX_CONFIG_VERSION_IEEE);
/* Parse APP TLV */
for (i = 0; i < count; i++) {
protocol_id = QED_MFW_GET_FIELD(p_tbl[i].entry,
@ -219,7 +274,7 @@ qed_dcbx_process_tlv(struct qed_hwfn *p_hwfn,
tc = QED_DCBX_PRIO2TC(pri_tc_tbl, priority);
if (qed_dcbx_get_app_protocol_type(p_hwfn, p_tbl[i].entry,
protocol_id, &type)) {
protocol_id, &type, ieee)) {
/* ETH always have the enable bit reset, as it gets
* vlan information per packet. For other protocols,
* should be set according to the dcbx_enabled
@ -275,15 +330,12 @@ static int qed_dcbx_process_mib_info(struct qed_hwfn *p_hwfn)
struct dcbx_ets_feature *p_ets;
struct qed_hw_info *p_info;
u32 pri_tc_tbl, flags;
bool dcbx_enabled;
u8 dcbx_version;
int num_entries;
int rc = 0;
/* If DCBx version is non-zero, then negotiation was
* successfully performed
*/
flags = p_hwfn->p_dcbx_info->operational.flags;
dcbx_enabled = !!QED_MFW_GET_FIELD(flags, DCBX_CONFIG_VERSION);
dcbx_version = QED_MFW_GET_FIELD(flags, DCBX_CONFIG_VERSION);
p_app = &p_hwfn->p_dcbx_info->operational.features.app;
p_tbl = p_app->app_pri_tbl;
@ -295,13 +347,13 @@ static int qed_dcbx_process_mib_info(struct qed_hwfn *p_hwfn)
num_entries = QED_MFW_GET_FIELD(p_app->flags, DCBX_APP_NUM_ENTRIES);
rc = qed_dcbx_process_tlv(p_hwfn, &data, p_tbl, pri_tc_tbl,
num_entries, dcbx_enabled);
num_entries, dcbx_version);
if (rc)
return rc;
p_info->num_tc = QED_MFW_GET_FIELD(p_ets->flags, DCBX_ETS_MAX_TCS);
data.pf_id = p_hwfn->rel_pf_id;
data.dcbx_enabled = dcbx_enabled;
data.dcbx_enabled = !!dcbx_version;
qed_dcbx_dp_protocol(p_hwfn, &data);
@ -400,7 +452,7 @@ static void
qed_dcbx_get_app_data(struct qed_hwfn *p_hwfn,
struct dcbx_app_priority_feature *p_app,
struct dcbx_app_priority_entry *p_tbl,
struct qed_dcbx_params *p_params)
struct qed_dcbx_params *p_params, bool ieee)
{
struct qed_app_entry *entry;
u8 pri_map;
@ -414,15 +466,46 @@ qed_dcbx_get_app_data(struct qed_hwfn *p_hwfn,
DCBX_APP_NUM_ENTRIES);
for (i = 0; i < DCBX_MAX_APP_PROTOCOL; i++) {
entry = &p_params->app_entry[i];
entry->ethtype = !(QED_MFW_GET_FIELD(p_tbl[i].entry,
DCBX_APP_SF));
if (ieee) {
u8 sf_ieee;
u32 val;
sf_ieee = QED_MFW_GET_FIELD(p_tbl[i].entry,
DCBX_APP_SF_IEEE);
switch (sf_ieee) {
case DCBX_APP_SF_IEEE_RESERVED:
/* Old MFW */
val = QED_MFW_GET_FIELD(p_tbl[i].entry,
DCBX_APP_SF);
entry->sf_ieee = val ?
QED_DCBX_SF_IEEE_TCP_UDP_PORT :
QED_DCBX_SF_IEEE_ETHTYPE;
break;
case DCBX_APP_SF_IEEE_ETHTYPE:
entry->sf_ieee = QED_DCBX_SF_IEEE_ETHTYPE;
break;
case DCBX_APP_SF_IEEE_TCP_PORT:
entry->sf_ieee = QED_DCBX_SF_IEEE_TCP_PORT;
break;
case DCBX_APP_SF_IEEE_UDP_PORT:
entry->sf_ieee = QED_DCBX_SF_IEEE_UDP_PORT;
break;
case DCBX_APP_SF_IEEE_TCP_UDP_PORT:
entry->sf_ieee = QED_DCBX_SF_IEEE_TCP_UDP_PORT;
break;
}
} else {
entry->ethtype = !(QED_MFW_GET_FIELD(p_tbl[i].entry,
DCBX_APP_SF));
}
pri_map = QED_MFW_GET_FIELD(p_tbl[i].entry, DCBX_APP_PRI_MAP);
entry->prio = ffs(pri_map) - 1;
entry->proto_id = QED_MFW_GET_FIELD(p_tbl[i].entry,
DCBX_APP_PROTOCOL_ID);
qed_dcbx_get_app_protocol_type(p_hwfn, p_tbl[i].entry,
entry->proto_id,
&entry->proto_type);
&entry->proto_type, ieee);
}
DP_VERBOSE(p_hwfn, QED_MSG_DCB,
@ -483,7 +566,7 @@ qed_dcbx_get_ets_data(struct qed_hwfn *p_hwfn,
bw_map[1] = be32_to_cpu(p_ets->tc_bw_tbl[1]);
tsa_map[0] = be32_to_cpu(p_ets->tc_tsa_tbl[0]);
tsa_map[1] = be32_to_cpu(p_ets->tc_tsa_tbl[1]);
pri_map = be32_to_cpu(p_ets->pri_tc_tbl[0]);
pri_map = p_ets->pri_tc_tbl[0];
for (i = 0; i < QED_MAX_PFC_PRIORITIES; i++) {
p_params->ets_tc_bw_tbl[i] = ((u8 *)bw_map)[i];
p_params->ets_tc_tsa_tbl[i] = ((u8 *)tsa_map)[i];
@ -500,9 +583,9 @@ qed_dcbx_get_common_params(struct qed_hwfn *p_hwfn,
struct dcbx_app_priority_feature *p_app,
struct dcbx_app_priority_entry *p_tbl,
struct dcbx_ets_feature *p_ets,
u32 pfc, struct qed_dcbx_params *p_params)
u32 pfc, struct qed_dcbx_params *p_params, bool ieee)
{
qed_dcbx_get_app_data(p_hwfn, p_app, p_tbl, p_params);
qed_dcbx_get_app_data(p_hwfn, p_app, p_tbl, p_params, ieee);
qed_dcbx_get_ets_data(p_hwfn, p_ets, p_params);
qed_dcbx_get_pfc_data(p_hwfn, pfc, p_params);
}
@ -516,7 +599,7 @@ qed_dcbx_get_local_params(struct qed_hwfn *p_hwfn,
p_feat = &p_hwfn->p_dcbx_info->local_admin.features;
qed_dcbx_get_common_params(p_hwfn, &p_feat->app,
p_feat->app.app_pri_tbl, &p_feat->ets,
p_feat->pfc, &params->local.params);
p_feat->pfc, &params->local.params, false);
params->local.valid = true;
}
@ -529,7 +612,7 @@ qed_dcbx_get_remote_params(struct qed_hwfn *p_hwfn,
p_feat = &p_hwfn->p_dcbx_info->remote.features;
qed_dcbx_get_common_params(p_hwfn, &p_feat->app,
p_feat->app.app_pri_tbl, &p_feat->ets,
p_feat->pfc, &params->remote.params);
p_feat->pfc, &params->remote.params, false);
params->remote.valid = true;
}
@ -574,7 +657,8 @@ qed_dcbx_get_operational_params(struct qed_hwfn *p_hwfn,
qed_dcbx_get_common_params(p_hwfn, &p_feat->app,
p_feat->app.app_pri_tbl, &p_feat->ets,
p_feat->pfc, &params->operational.params);
p_feat->pfc, &params->operational.params,
p_operational->ieee);
qed_dcbx_get_priority_info(p_hwfn, &p_operational->app_prio, p_results);
err = QED_MFW_GET_FIELD(p_feat->app.flags, DCBX_APP_ERROR);
p_operational->err = err;
@ -944,7 +1028,6 @@ qed_dcbx_set_ets_data(struct qed_hwfn *p_hwfn,
val = (((u32)p_params->ets_pri_tc_tbl[i]) << ((7 - i) * 4));
p_ets->pri_tc_tbl[0] |= val;
}
p_ets->pri_tc_tbl[0] = cpu_to_be32(p_ets->pri_tc_tbl[0]);
for (i = 0; i < 2; i++) {
p_ets->tc_bw_tbl[i] = cpu_to_be32(p_ets->tc_bw_tbl[i]);
p_ets->tc_tsa_tbl[i] = cpu_to_be32(p_ets->tc_tsa_tbl[i]);
@ -954,7 +1037,7 @@ qed_dcbx_set_ets_data(struct qed_hwfn *p_hwfn,
static void
qed_dcbx_set_app_data(struct qed_hwfn *p_hwfn,
struct dcbx_app_priority_feature *p_app,
struct qed_dcbx_params *p_params)
struct qed_dcbx_params *p_params, bool ieee)
{
u32 *entry;
int i;
@ -975,12 +1058,36 @@ qed_dcbx_set_app_data(struct qed_hwfn *p_hwfn,
for (i = 0; i < DCBX_MAX_APP_PROTOCOL; i++) {
entry = &p_app->app_pri_tbl[i].entry;
*entry &= ~DCBX_APP_SF_MASK;
if (p_params->app_entry[i].ethtype)
*entry |= ((u32)DCBX_APP_SF_ETHTYPE <<
DCBX_APP_SF_SHIFT);
else
*entry |= ((u32)DCBX_APP_SF_PORT << DCBX_APP_SF_SHIFT);
if (ieee) {
*entry &= ~DCBX_APP_SF_IEEE_MASK;
switch (p_params->app_entry[i].sf_ieee) {
case QED_DCBX_SF_IEEE_ETHTYPE:
*entry |= ((u32)DCBX_APP_SF_IEEE_ETHTYPE <<
DCBX_APP_SF_IEEE_SHIFT);
break;
case QED_DCBX_SF_IEEE_TCP_PORT:
*entry |= ((u32)DCBX_APP_SF_IEEE_TCP_PORT <<
DCBX_APP_SF_IEEE_SHIFT);
break;
case QED_DCBX_SF_IEEE_UDP_PORT:
*entry |= ((u32)DCBX_APP_SF_IEEE_UDP_PORT <<
DCBX_APP_SF_IEEE_SHIFT);
break;
case QED_DCBX_SF_IEEE_TCP_UDP_PORT:
*entry |= ((u32)DCBX_APP_SF_IEEE_TCP_UDP_PORT <<
DCBX_APP_SF_IEEE_SHIFT);
break;
}
} else {
*entry &= ~DCBX_APP_SF_MASK;
if (p_params->app_entry[i].ethtype)
*entry |= ((u32)DCBX_APP_SF_ETHTYPE <<
DCBX_APP_SF_SHIFT);
else
*entry |= ((u32)DCBX_APP_SF_PORT <<
DCBX_APP_SF_SHIFT);
}
*entry &= ~DCBX_APP_PROTOCOL_ID_MASK;
*entry |= ((u32)p_params->app_entry[i].proto_id <<
DCBX_APP_PROTOCOL_ID_SHIFT);
@ -995,15 +1102,19 @@ qed_dcbx_set_local_params(struct qed_hwfn *p_hwfn,
struct dcbx_local_params *local_admin,
struct qed_dcbx_set *params)
{
bool ieee = false;
local_admin->flags = 0;
memcpy(&local_admin->features,
&p_hwfn->p_dcbx_info->operational.features,
sizeof(local_admin->features));
if (params->enabled)
if (params->enabled) {
local_admin->config = params->ver_num;
else
ieee = !!(params->ver_num & DCBX_CONFIG_VERSION_IEEE);
} else {
local_admin->config = DCBX_CONFIG_VERSION_DISABLED;
}
if (params->override_flags & QED_DCBX_OVERRIDE_PFC_CFG)
qed_dcbx_set_pfc_data(p_hwfn, &local_admin->features.pfc,
@ -1015,7 +1126,7 @@ qed_dcbx_set_local_params(struct qed_hwfn *p_hwfn,
if (params->override_flags & QED_DCBX_OVERRIDE_APP_CFG)
qed_dcbx_set_app_data(p_hwfn, &local_admin->features.app,
&params->config.params);
&params->config.params, ieee);
}
int qed_dcbx_config_params(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
@ -1596,8 +1707,10 @@ static int qed_dcbnl_setapp(struct qed_dev *cdev,
if ((entry->ethtype == ethtype) && (entry->proto_id == idval))
break;
/* First empty slot */
if (!entry->proto_id)
if (!entry->proto_id) {
dcbx_set.config.params.num_app_entries++;
break;
}
}
if (i == QED_DCBX_MAX_APP_PROTOCOL) {
@ -2117,8 +2230,10 @@ int qed_dcbnl_ieee_setapp(struct qed_dev *cdev, struct dcb_app *app)
(entry->proto_id == app->protocol))
break;
/* First empty slot */
if (!entry->proto_id)
if (!entry->proto_id) {
dcbx_set.config.params.num_app_entries++;
break;
}
}
if (i == QED_DCBX_MAX_APP_PROTOCOL) {
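A recurring shape in the qed changes: each classifier first reads the new IEEE selection field, and DCBX_APP_SF_IEEE_RESERVED is taken to mean old management firmware, in which case the legacy (pre-IEEE) decoding is used. Condensed, with hypothetical helpers:

static bool app_sf_matches(u32 entry, u8 wanted, bool ieee)
{
	if (ieee) {
		u8 sf = QED_MFW_GET_FIELD(entry, DCBX_APP_SF_IEEE);

		if (sf == DCBX_APP_SF_IEEE_RESERVED)	/* old MFW */
			return legacy_sf_matches(entry, wanted);
		return sf == wanted;
	}
	return legacy_sf_matches(entry, wanted);
}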


@ -6850,6 +6850,14 @@ struct dcbx_app_priority_entry {
#define DCBX_APP_SF_SHIFT 8
#define DCBX_APP_SF_ETHTYPE 0
#define DCBX_APP_SF_PORT 1
#define DCBX_APP_SF_IEEE_MASK 0x0000f000
#define DCBX_APP_SF_IEEE_SHIFT 12
#define DCBX_APP_SF_IEEE_RESERVED 0
#define DCBX_APP_SF_IEEE_ETHTYPE 1
#define DCBX_APP_SF_IEEE_TCP_PORT 2
#define DCBX_APP_SF_IEEE_UDP_PORT 3
#define DCBX_APP_SF_IEEE_TCP_UDP_PORT 4
#define DCBX_APP_PROTOCOL_ID_MASK 0xffff0000
#define DCBX_APP_PROTOCOL_ID_SHIFT 16
};


@ -37,8 +37,8 @@
#define _QLCNIC_LINUX_MAJOR 5
#define _QLCNIC_LINUX_MINOR 3
#define _QLCNIC_LINUX_SUBVERSION 64
#define QLCNIC_LINUX_VERSIONID "5.3.64"
#define _QLCNIC_LINUX_SUBVERSION 65
#define QLCNIC_LINUX_VERSIONID "5.3.65"
#define QLCNIC_DRV_IDC_VER 0x01
#define QLCNIC_DRIVER_VERSION ((_QLCNIC_LINUX_MAJOR << 16) |\
(_QLCNIC_LINUX_MINOR << 8) | (_QLCNIC_LINUX_SUBVERSION))


@ -102,7 +102,6 @@
#define QLCNIC_RESPONSE_DESC 0x05
#define QLCNIC_LRO_DESC 0x12
#define QLCNIC_TX_POLL_BUDGET 128
#define QLCNIC_TCP_HDR_SIZE 20
#define QLCNIC_TCP_TS_OPTION_SIZE 12
#define QLCNIC_FETCH_RING_ID(handle) ((handle) >> 63)
@ -2008,7 +2007,6 @@ static int qlcnic_83xx_msix_tx_poll(struct napi_struct *napi, int budget)
struct qlcnic_host_tx_ring *tx_ring;
struct qlcnic_adapter *adapter;
budget = QLCNIC_TX_POLL_BUDGET;
tx_ring = container_of(napi, struct qlcnic_host_tx_ring, napi);
adapter = tx_ring->adapter;
work_done = qlcnic_process_cmd_ring(adapter, tx_ring, budget);

Some files were not shown because too many files have changed in this diff.