inet_csk_get_port() randomization tends to spread sockets
across the whole available range (ip_local_port_range).
This is unfortunate because SO_REUSEADDR sockets have
fewer requirements than non-SO_REUSEADDR ones.
If an application uses the SO_REUSEADDR hint, it is trying to
allow source ports to be shared.
So instead of picking a random port number anywhere in ip_local_port_range,
let's first try the lower half of the range.
This gives more chances to use the upper half of the range for the
sockets with stronger requirements (those not using SO_REUSEADDR).
Note this patch does not add a new sysctl; it only changes
the way we try to pick a port number.
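For illustration, a rough sketch of the idea (not the exact kernel
code; the scan loop itself is omitted):

	/* SO_REUSEADDR sockets scan the lower half of the range first */
	inet_get_local_port_range(net, &low, &high);
	if (sk->sk_reuse == SK_CAN_REUSE) {
		int half = low + (high - low) / 2;

		/* first attempt: [low, half]; on failure, fall back
		 * to a second scan over the upper half
		 */
		high = half;
	}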
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Marcelo Ricardo Leitner <mleitner@redhat.com>
Cc: Flavio Leitner <fbl@redhat.com>
Acked-by: Flavio Leitner <fbl@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We no longer need the bsocket atomic counter, as inet_csk_get_port()
calls bind_conflict() regardless of its value, after commit
2b05ad33e1 ("tcp: bind() fix autoselection to share ports").
This patch removes the overhead of maintaining this counter and of the
double inet_csk_get_port() calls under pressure.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Marcelo Ricardo Leitner <mleitner@redhat.com>
Cc: Flavio Leitner <fbl@redhat.com>
Acked-by: Flavio Leitner <fbl@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We currently rely on the setting of SOCK_NOSPACE in the write()
path to ensure that we wake up any epoll edge-trigger waiters when
ACKs return to free space in the write queue. However, if we fail
to allocate even a single skb in the write queue, we could end up
waiting indefinitely.
Fix this by explicitly issuing a wakeup when we detect the condition
of an empty write queue and a return value of -EAGAIN. This allows
userspace to retry, as we expect this to be a temporary failure.
I've tested this approach by artificially making
sk_stream_alloc_skb() return NULL periodically. In that case,
epoll edge trigger waiters will hang indefinitely in epoll_wait()
without this patch.
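The shape of the fix, as a sketch (error path of tcp_sendmsg();
illustrative rather than the exact diff):

out_err:
	err = sk_stream_error(sk, flags, err);
	/* make sure we wake any epoll edge-trigger waiter */
	if (unlikely(skb_queue_len(&sk->sk_write_queue) == 0 && err == -EAGAIN))
		sk->sk_write_space(sk);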
Signed-off-by: Jason Baron <jbaron@akamai.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hariprasad Shenai says:
====================
cxgb4: Cleanup and update T4/T5 register ranges
This series cleans up and optimizes the setup_memwin function and also
updates the T4/T5 adapter register ranges by removing incorrect register
addresses.
This patch series has been created against the net-next tree and includes
patches on the cxgb4 driver.
We have included all the maintainers of the respective drivers. Kindly
review the change and let us know in case of any review comments.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove some T4/T5 registers that were included incorrectly.
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shradha Shah says:
====================
sfc: Get/Set MAC address and ndo_[set/get]_vf_* entrypoint functions
This is the second installment of patches towards supporting EF10 SRIOV.
This patch series implements the ndo_get_vf_config, ndo_set_vf_mac,
ndo_set_vf_vlan and ndo_set_vf_spoofcheck function callbacks for EF10.
This patch series also introduces privileges for the MCDI commands
based on which functions are allowed to call them, i.e. Link control
or primary function.
The patch series has been tested with and without CONFIG_SFC_SRIOV.
The ndo function callbacks are tested using ip link.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a set_mac_address() NIC-type function for EF10 only, and
use this to set the MAC address on the vadaptor. For Siena and
earlier, the MAC address continues to be set by MC_CMD_SET_MAC;
this is still called on EF10, and including a MAC address in
this command has no effect.
The sriov_mac_address_changed() NIC-type function is no longer
needed on EF10, but it is needed for Siena where it is used to
update the peer address of the PF for VFDI. Change this to use
the new set_mac_address function pointer.
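As a sketch, the new hook sits alongside the other per-architecture
methods in struct efx_nic_type (surrounding members elided):

struct efx_nic_type {
	/* ... */
	int (*set_mac_address)(struct efx_nic *efx);
	/* ... */
};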
efx_ef10_sriov_mac_address_changed() is no longer called, as VFs
will try to change the MAC address on their vadaptor rather than
trying to change the context of the PF to alter the vport.
When a VF is running in direct passthrough mode with MAC spoofing
enabled, it will be able to change the MAC address on its vadaptor.
In this case, there is a link to the PF, so find the correct VF in
its ef10_vf array and update the MAC address.
ndo_set_mac_address() can be called during driver unload while
bonding, and in this case the device has already been stopped, so
don't call efx_net_open() to restart it after reconfiguration.
efx->port_enabled is set to false in efx_stop_port(), so it is an
indicator of whether the device needs to be restarted.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Exercised with
"ip link set <PF intf> vf <vf_i> state {auto|enable|disable}"
Sets the reporting policy for VF link state to either
- mirror physical link state
- always up
- always down
Get the VF link state mode in efx_ef10_sriov_get_vf_config().
Exercised by
"ip link show <PF intf>";
output will include a line like
vf 0 MAC 12:34:56:78:9a:bc, link-state auto
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The maximum number of VLAN tags that can be offloaded is 2, including any
upstream VLAN aggregator. Currently there is no way for the net driver to
know whether the upstream vswitch (if any) is using VLAN tags, so there is
no way to know how many tags we can request.
Along with the implementation of the ndo_set_vf_vlan callback, this patch
also requests 2 VLAN tags for the driver-created VEB switch where possible,
so that as many tags as are allowed can be offloaded.
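A sketch of the callback's shape (the prototype follows the ndo
convention; the vport-rebuild helper is hypothetical):

static int efx_ef10_sriov_set_vf_vlan(struct net_device *net_dev, int vf_i,
				      u16 vlan, u8 qos)
{
	struct efx_nic *efx = netdev_priv(net_dev);

	if (vlan > 4095 || qos != 0)
		return -EINVAL;

	/* tear down and recreate the VF's vport with the new tag */
	return efx_ef10_vport_set_vlan(efx, vf_i, vlan);	/* hypothetical */
}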
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently we do an entity reset when we detect an MC reboot.
This messes up SRIOV because it leaves VFs orphaned. The extra
reset is rather redundant anyway, since the MC reboot will have
basically reset everything.
This change replaces the entity reset after MC reboot with a
simpler datapath reset that reallocates resources but doesn't
perform the entity reset.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
rtnetlink calls ndo_get_vf_config when compiling information
about a network interface, so that the VFs associated with a PF
can be listed (e.g. ip link show).
Implement a response to this entry point, returning the PF-set MAC
address for the VF in ndo_get_vf_config.
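A minimal sketch of such a handler (the struct ifla_vf_info fields are
the standard ones; the VF lookup helper is hypothetical):

static int efx_ef10_sriov_get_vf_config(struct net_device *net_dev, int vf_i,
					struct ifla_vf_info *ivi)
{
	struct efx_nic *efx = netdev_priv(net_dev);
	struct ef10_vf *vf = efx_ef10_vf(efx, vf_i);	/* hypothetical lookup */

	if (!vf)
		return -EINVAL;
	ivi->vf = vf_i;
	ether_addr_copy(ivi->mac, vf->mac);	/* the PF-set MAC address */
	ivi->vlan = vf->vlan;
	ivi->qos = 0;
	return 0;
}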
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Implement a response to this entrypoint.
The ndo_set_vf_mac() entrypoint is only exposed in the driver if
CONFIG_SFC_SRIOV is defined.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In order to avoid MC bugs, the flags field needs to be set to 0.
Instead of explicitly clearing out the flags individually, a
better way to do this is to memset the MCDI buffer to 0.
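The pattern, sketched (MCDI_DECLARE_BUF is the driver's buffer-declaring
macro; the command name here is illustrative):

	MCDI_DECLARE_BUF(inbuf, MC_CMD_VADAPTOR_SET_MAC_IN_LEN);

	memset(inbuf, 0, sizeof(inbuf));	/* all flags cleared at once */
	/* ... then fill in only the fields the command actually uses ... */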
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
A VF's MAC address is set by its parent PF and added to its vport.
To get this MAC address, the VF must use MC_CMD_VPORT_GET_MAC_ADDRESSES.
In the current scheme, a VF's vport should only have one MAC address,
so warn if this is not the case.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If MCDI timeouts are encountered during efx_ef10_filter_table_remove(),
an FLR will be queued, but efx->filter_state will still be kfree()d.
The queued FLR will then call efx_ef10_filter_table_restore(), which
will try to use efx->filter_state. This previously caused a panic.
This patch adds an rwsem to protect the existence of efx->filter_state,
separately from the spinlock protecting its contents. Users which can
race against efx_ef10_filter_table_remove() should down_read this rwsem.
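The resulting pattern, as a sketch (a semaphore name like
efx->filter_sem is assumed; illustrative only):

	/* readers racing against efx_ef10_filter_table_remove() */
	down_read(&efx->filter_sem);
	/* ... safe to dereference efx->filter_state here ... */
	up_read(&efx->filter_sem);

	/* the remove path takes the write side before freeing */
	down_write(&efx->filter_sem);
	/* ... kfree() and NULL out efx->filter_state ... */
	up_write(&efx->filter_sem);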
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
vf->efx is initialised in efx_probe_vf and its removal is dealt with
in efx_ef10_remove.
It is needed in future patches to change the MAC address
of the VF via the parent PF, while the driver is bound to the
VF.
Example: ip link set dev vf NUM mac LLADDR
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Otherwise the PF and VF can disagree on the VF's MAC address and
this leads to strange behaviour, up to and including kernel panics.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Added function efx_ef10_get_vf_index to store the vf_index
in nic_data during probe.
vf_index is needed in future patches to access a particular
VF in the VF data structure.
Moved efx_ef10_probe_pf and efx_ef10_probe_vf in order to
use efx_ef10_remove.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
MC_CMD_SET_MAC is privileged and can only be called by the link
control function.
This patch adds efx_ef10_mac_reconfigure_vf, which avoids the call
to MC_CMD_SET_MAC by the virtual function.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is one primary function per adaptor, one link control function
per port, and the rest are categorised as general.
This patch adds privileges to the MCDI commands based on which
functions are allowed to call them.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This also matches the sibling call netdev_alloc_skb_ip_align() made in
the RX fast path.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
30 usecs (or really, 1 jiffy) can go by pretty fast.
Move the setting of the timeout to immediately before the loop.
Remove the unnecessary max(1ul, usecs_to_jiffies(30)), as
usecs_to_jiffies with a non-zero constant is guaranteed
to be non-zero.
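The intended shape, as a sketch (the completion check is a
placeholder):

	unsigned long timeout = jiffies + usecs_to_jiffies(30);

	do {
		if (hw_op_done(priv))	/* hypothetical completion check */
			break;
		udelay(1);
	} while (time_before(jiffies, timeout));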
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Simon Horman says:
====================
rocker: transaction fixes
This series addresses what appear to be errors in the handling of
prepare-then-commit transactions in the rocker driver.
In all cases the problem is that data structures visible outside of
the transaction are modified during the prepare phase.
In the case of the first two patches this results in the kernel reporting a
BUG. I have noted test-cases in the change logs.
The third patch is also a bug fix, as noted by Toshiaki Makita,
however I have not been able to reliably reproduce the problem and
thus have not provided a test case.
The last patch is a correctness fix that does not fix a bug
that manifests as far as I can tell.
Changes: v3->v4
* All patches
- Add Jiri Pirko's ack
* "rocker: do not make neighbour entry changes when preparing transactions"
- Setting of entry values in all transaction phases
as suggested by Toshiaki Makita
* "rocker: make rocker_port_internal_vlan_id_{get,put}() non-transactional"
- Remove Fixes tag as I believe this is a correctness rather than a bug fix
Changes: v2->v3
* "rocker: do not make neighbour entry changes when preparing transactions"
- Correct inverted logic
- Added ack from Scott Feldman
Changes: v1->v2
* "rocker: do not make neighbour entry changes when preparing transactions"
- Revised changelog to reflect information from Toshiaki Makita
that there is a bug that can manifest
- Update address and ttl regardless of the value of the transaction state
* All other patches
- Added acks from Scott Feldman
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The motivation for this is that rocker_port_internal_vlan_id_{get,put}
appear to only partially implement the transaction model: memory allocation
and freeing is transactional, but hash and bitmap manipulation is not.
The latter could be fixed; however, as it is not currently exercised
(trans is always SWITCHDEV_TRANS_NONE), it seems cleaner
to make rocker_port_internal_vlan_id_get non-transactional.
This problem was introduced by c4f20321d9 ("rocker: support
prepare-commit transaction model").
Found by inspection.
I do not believe that this change should have any run-time effect.
Acked-by: Scott Feldman <sfeldma@gmail.com>
Acked-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
rocker_port_ipv4_nh(), and in turn rocker_port_ipv4_neigh(), may be
called with trans == SWITCHDEV_TRANS_PREPARE and then
trans == SWITCHDEV_TRANS_COMMIT from switchdev_port_obj_set() via
fib_table_insert().
The first time that rocker_port_ipv4_nh() is called, with
trans == SWITCHDEV_TRANS_PREPARE, _rocker_neigh_add() adds a new entry to
the neigh table.
The second time rocker_port_ipv4_nh() is called, with
trans == SWITCHDEV_TRANS_COMMIT, that entry is found. This causes
rocker_port_ipv4_nh() to believe it is not adding an entry, and thus it
frees "entry", which is still present in the rocker driver's neigh table.
This problem does not appear to affect deletion as my analysis is that
deletion is always performed with trans == SWITCHDEV_TRANS_NONE.
For completeness _rocker_neigh_{add,del,prepare} are updated not to
manipulate fib table entries if trans == SWITCHDEV_TRANS_PREPARE.
Fixes: c4f20321d9 ("rocker: support prepare-commit transaction model")
Reported-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Acked-by: Scott Feldman <sfeldma@gmail.com>
Acked-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use the generic mechanism to declare a bitmap instead of unsigned long.
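That is (names generic):

	DECLARE_BITMAP(name, nbits);
	/* instead of: unsigned long name[BITS_TO_LONGS(nbits)]; */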
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexei Starovoitov says:
====================
bpf: introduce bpf_tail_call() helper
introduce bpf_tail_call(ctx, &jmp_table, index) helper function
which can be used from BPF programs like:
int bpf_prog(struct pt_regs *ctx)
{
...
bpf_tail_call(ctx, &jmp_table, index);
...
}
that is roughly equivalent to:
int bpf_prog(struct pt_regs *ctx)
{
...
if (jmp_table[index])
return (*jmp_table[index])(ctx);
...
}
The important detail is that it's not a normal call, but a tail call.
The kernel stack is precious, so this helper reuses the current
stack frame and jumps into another BPF program without adding
extra call frame.
It's trivially done in interpreter and a bit trickier in JITs.
Use cases:
- simplify complex programs
- dispatch into other programs
(for example: index in jump table can be syscall number or network protocol)
- build dynamic chains of programs
The chain of tail calls can form unpredictable dynamic loops therefore
tail_call_cnt is used to limit the number of calls and currently is set to 32.
patch 1 - support bpf_tail_call() in interpreter
patch 2 - support in x64 JIT
We've discussed what's necessary to support it in the arm64/s390 JITs
and it looks fine.
patch 3 - sample example for tracing
patch 4 - sample example for networking
More details in every patch.
This set went through several iterations of reviews/fixes and older
attempts can be seen:
https://git.kernel.org/cgit/linux/kernel/git/ast/bpf.git/log/?h=tail_call_v[123456]
- tail_call_v1 does it without touching JITs but introduces overhead
for all programs that don't use this helper function.
- tail_call_v2 still has some overhead and x64 JIT does full stack
unwind (prologue skipping optimization wasn't there)
- tail_call_v3 reuses 'call' instruction encoding and has interpreter
overhead for every normal call
- tail_call_v4 fixes above architectural shortcomings and v5,v6 fix few
more bugs
This last tail_call_v6 approach seems to be the best.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Usage:
$ sudo ./sockex3
IP src.port -> dst.port bytes packets
127.0.0.1.42010 -> 127.0.0.1.12865 1568 8
127.0.0.1.59526 -> 127.0.0.1.33778 11422636 173070
127.0.0.1.33778 -> 127.0.0.1.59526 11260224828 341974
127.0.0.1.12865 -> 127.0.0.1.42010 1832 12
IP src.port -> dst.port bytes packets
127.0.0.1.42010 -> 127.0.0.1.12865 1568 8
127.0.0.1.59526 -> 127.0.0.1.33778 23198092 351486
127.0.0.1.33778 -> 127.0.0.1.59526 22972698518 698616
127.0.0.1.12865 -> 127.0.0.1.42010 1832 12
This example is similar to sockex2 in that it accumulates per-flow
statistics, but it does packet parsing differently.
sockex2 inlines the full packet parser routine into a single bpf program.
This sockex3 example has 4 independent programs that parse vlan, mpls, ip, ipv6
and one main program that starts the process.
bpf_tail_call() mechanism allows each program to be small and be called
on demand potentially multiple times, so that many vlan, mpls, ip in ip,
gre encapsulations can be parsed. These and other protocol parsers can
be added or removed at runtime. TLVs can be parsed in similar manner.
Note, tail_call_cnt dynamic check limits the number of tail calls to 32.
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
bpf_tail_call() arguments:
ctx - context pointer
jmp_table - one of BPF_MAP_TYPE_PROG_ARRAY maps used as the jump table
index - index in the jump table
In this implementation x64 JIT bypasses stack unwind and jumps into the
callee program after prologue, so the callee program reuses the same stack.
The logic can be roughly expressed in C like:
u32 tail_call_cnt;
void *jumptable[2] = { &&label1, &&label2 };
int bpf_prog1(void *ctx)
{
label1:
...
}
int bpf_prog2(void *ctx)
{
label2:
...
}
int bpf_prog1(void *ctx)
{
...
if (tail_call_cnt++ < MAX_TAIL_CALL_CNT)
goto *jumptable[index]; ... and pass my 'ctx' to callee ...
... fall through if no entry in jumptable ...
}
Note that 'skip current program epilogue and next program prologue' is
an optimization. Other JITs don't have to do it the same way.
From a safety point of view it's valid as well, since programs always
initialize the stack before use, so any residue in the stack left by
the current program is not going to be read. The same verifier checks are
done for the calls from the kernel into all bpf programs.
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
introduce bpf_tail_call(ctx, &jmp_table, index) helper function
which can be used from BPF programs like:
int bpf_prog(struct pt_regs *ctx)
{
...
bpf_tail_call(ctx, &jmp_table, index);
...
}
that is roughly equivalent to:
int bpf_prog(struct pt_regs *ctx)
{
...
if (jmp_table[index])
return (*jmp_table[index])(ctx);
...
}
The important detail is that it's not a normal call, but a tail call.
The kernel stack is precious, so this helper reuses the current
stack frame and jumps into another BPF program without adding
extra call frame.
It's trivially done in interpreter and a bit trickier in JITs.
In case of x64 JIT the bigger part of generated assembler prologue
is common for all programs, so it is simply skipped while jumping.
Other JITs can do similar prologue-skipping optimization or
do stack unwind before jumping into the next program.
bpf_tail_call() arguments:
ctx - context pointer
jmp_table - one of BPF_MAP_TYPE_PROG_ARRAY maps used as the jump table
index - index in the jump table
Since all BPF programs are identified by file descriptor, user space
needs to populate the jmp_table with FDs of other BPF programs.
If jmp_table[index] is empty, bpf_tail_call() doesn't jump anywhere
and program execution continues as normal.
New BPF_MAP_TYPE_PROG_ARRAY map type is introduced so that user space can
populate this jmp_table array with FDs of other bpf programs.
Programs can share the same jmp_table array or use multiple jmp_tables.
The chain of tail calls can form unpredictable dynamic loops therefore
tail_call_cnt is used to limit the number of calls and currently is set to 32.
Use cases:
==========
- simplify complex programs by splitting them into a sequence of small programs
- dispatch routine
For tracing and future seccomp the program may be triggered on all system
calls, but processing of syscall arguments will be different. It's more
efficient to implement them as:
int syscall_entry(struct seccomp_data *ctx)
{
bpf_tail_call(ctx, &syscall_jmp_table, ctx->nr /* syscall number */);
... default: process unknown syscall ...
}
int sys_write_event(struct seccomp_data *ctx) {...}
int sys_read_event(struct seccomp_data *ctx) {...}
syscall_jmp_table[__NR_write] = sys_write_event;
syscall_jmp_table[__NR_read] = sys_read_event;
For networking the program may call into different parsers depending on
packet format, like:
int packet_parser(struct __sk_buff *skb)
{
... parse L2, L3 here ...
__u8 ipproto = load_byte(skb, ... offsetof(struct iphdr, protocol));
bpf_tail_call(skb, &ipproto_jmp_table, ipproto);
... default: process unknown protocol ...
}
int parse_tcp(struct __sk_buff *skb) {...}
int parse_udp(struct __sk_buff *skb) {...}
ipproto_jmp_table[IPPROTO_TCP] = parse_tcp;
ipproto_jmp_table[IPPROTO_UDP] = parse_udp;
- for the TC use case, bpf_tail_call() makes it possible to implement
reclassify-like logic
- bpf_map_update_elem/delete calls into BPF_MAP_TYPE_PROG_ARRAY jump table
are atomic, so user space can build chains of BPF programs on the fly
Implementation details:
=======================
- high performance of bpf_tail_call() is the goal.
It could have been implemented without JIT changes as a wrapper on top of
BPF_PROG_RUN() macro, but with two downsides:
. all programs would have to pay a performance penalty for this feature and
the tail call itself would be slower, since a mandatory stack unwind, return,
and stack allocation would be done for every tail call.
. tailcall would be limited to programs running preempt_disabled, since
generic 'void *ctx' doesn't have room for 'tail_call_cnt' and it would
need to be either global per_cpu variable accessed by helper and by wrapper
or global variable protected by locks.
In this implementation x64 JIT bypasses stack unwind and jumps into the
callee program after prologue.
- bpf_prog_array_compatible() ensures that the prog_type of callee and caller
are the same and that the JITed/non-JITed flag is the same, because calling a
JITed program from a non-JITed one is invalid, since their stack frames are
different. Similarly, calling a kprobe-type program from a socket-type program
is invalid.
- jump table is implemented as BPF_MAP_TYPE_PROG_ARRAY to reuse 'map'
abstraction, its user space API and all of verifier logic.
It's in the existing arraymap.c file, since several functions are
shared with regular array map.
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reduce ifdef pollution slightly, no functional change. We can simply
remove the extra alternative definition of handle_ing() and nf_ingress().
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
In commit 8e4d980ac2 ("tcp: fix behavior for epoll edge trigger")
we fixed a possible hang of TCP sockets under memory pressure,
by allowing sk_stream_alloc_skb() to use sk_forced_mem_schedule()
if no packet is in socket write queue.
It turns out there are other cases where we want to force memory
schedule: tcp_fragment() & tso_fragment() need to split a big TSO
packet into two smaller ones. If we block here because of TCP memory
pressure, we can effectively block the TCP socket from sending new data.
If no further ACK is coming, this hang would be definitive, and the socket
would have no chance to effectively reduce its memory usage.
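A sketch of the change's shape in tcp_fragment()/tso_fragment()
(illustrative; the final argument is the force-schedule flag added by
the earlier commit):

	buff = sk_stream_alloc_skb(sk, nsize, gfp, true);	/* force memory schedule */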
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
[1] When entering NUD_PROBE state via neigh_update(), perhaps received
from userspace, correctly (re)initialize the probes count to zero.
This is useful for forcing revalidation of a neighbor (for example
if the host is attempting to do DNA [RFC 4436 for IPv4, RFC 6059 for IPv6]).
[2] Notify listeners when a neighbor goes into NUD_PROBE state.
By sending notifications on entry to NUD_PROBE state, listeners get
more timely warnings of imminent connectivity issues.
The current notifications on entry to NUD_STALE have somewhat
limited usefulness: NUD_STALE is a perfectly normal state, as is
NUD_DELAY, whereas notifications on entry to NUD_FAILURE come after
a neighbor reachability problem has been confirmed (typically after
three probes).
Signed-off-by: Erik Kline <ek@google.com>
Acked-By: Lorenzo Colitti <lorenzo@google.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The ip_do_nat() function was removed prior to kernel 3.4. Remove its
now-unnecessary function prototype as well.
Reported-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Andy Zhou <azhou@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This work is a follow-up of commit f7b3bec6f5 ("net: allow setting ecn
via routing table") and adds RFC3168, section 6.1.1.1. fallback for outgoing
ECN connections. In other words, this work adds a retry with a non-ECN
setup SYN packet, as suggested by the RFC, on the first timeout:
[...] A host that receives no reply to an ECN-setup SYN within the
normal SYN retransmission timeout interval MAY resend the SYN and
any subsequent SYN retransmissions with CWR and ECE cleared. [...]
Schematic client-side view when assuming the server is in tcp_ecn=2 mode,
that is, Linux default since 2009 via commit 255cac91c3 ("tcp: extend
ECN sysctl to allow server-side only ECN"):
1) Normal ECN-capable path:
SYN ECE CWR ----->
<----- SYN ACK ECE
ACK ----->
2) Path with broken middlebox, when client has fallback:
SYN ECE CWR ----X crappy middlebox drops packet
(timeout, rtx)
SYN ----->
<----- SYN ACK
ACK ----->
In case we would not have the fallback implemented, the middlebox drop
point would basically end up as:
SYN ECE CWR ----X crappy middlebox drops packet
(timeout, rtx)
SYN ECE CWR ----X crappy middlebox drops packet
(timeout, rtx)
SYN ECE CWR ----X crappy middlebox drops packet
(timeout, rtx)
In any case, only a rather small percentage of sites would incur such
additional setup latency: at the end of 2014 it was found that ~56%
of IPv4 and 65% of IPv6 servers on the Alexa 1 million list would negotiate
ECN (i.e. the tcp_ecn=2 default), and 0.42% of these webservers fail to connect
when trying to negotiate with ECN (tcp_ecn=1) due to timeouts, which the
fallback would mitigate with a slight latency trade-off. A recent related
paper on this topic:
Brian Trammell, Mirja Kühlewind, Damiano Boppart, Iain Learmonth,
Gorry Fairhurst, and Richard Scheffenegger:
"Enabling Internet-Wide Deployment of Explicit Congestion Notification."
Proc. PAM 2015, New York.
http://ecn.ethz.ch/ecn-pam15.pdf
Thus, when net.ipv4.tcp_ecn=1 is being set, the patch will perform RFC3168,
section 6.1.1.1. fallback on timeout. For users explicitly not wanting this
which can be in DC use case, we add a net.ipv4.tcp_ecn_fallback knob that
allows for disabling the fallback.
tp->ecn_flags are not being cleared in tcp_ecn_clear_syn() on the output
path; rather, we let tcp_ecn_rcv_synack() take that over on the input path
in case a SYN ACK ECE was delayed. Thus a spurious SYN retransmission will
not prevent ECN from being negotiated eventually in that case.
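The output-path half of the fallback, sketched (the sysctl is the knob
added by this patch; the body is illustrative):

static void tcp_ecn_clear_syn(struct sock *sk, struct sk_buff *skb)
{
	if (sysctl_tcp_ecn_fallback)
		/* tp->ecn_flags stay untouched; tcp_ecn_rcv_synack()
		 * recovers if the SYN ACK ECE was merely delayed
		 */
		TCP_SKB_CB(skb)->tcp_flags &= ~(TCPHDR_ECE | TCPHDR_CWR);
}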
Reference: https://www.ietf.org/proceedings/92/slides/slides-92-iccrg-1.pdf
Reference: https://www.ietf.org/proceedings/89/slides/slides-89-tsvarea-1.pdf
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Mirja Kühlewind <mirja.kuehlewind@tik.ee.ethz.ch>
Signed-off-by: Brian Trammell <trammell@tik.ee.ethz.ch>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Dave Taht <dave.taht@gmail.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hariprasad Shenai says:
====================
cxgb4: Remove dead code and replace byte-oder functions
This series removes the dead functions t4_read_edc and t4_read_mc, and
replaces ntoh{s,l} and hton{s,l} calls with the generic byte-order
functions.
PATCH 2/2 was sent as a single PATCH, but it had some byte-ordering issues
in the t4_read_edc and t4_read_mc functions. We found that t4_read_edc and
t4_read_mc are unused, so PATCH 1/2 is added to remove them.
This patch series is created against the net-next tree and includes
patches on the cxgb4 driver.
We have included all the maintainers of the respective drivers. Kindly
review the change and let us know in case of any review comments.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Replace ntoh{s,l} and hton{s,l} calls with the generic byte-order
functions in the cxgb4/t4_hw.c file.
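The substitution pattern, for example:

	val = be32_to_cpu(p->field);	/* was: ntohl(p->field) */
	p->field16 = cpu_to_be16(x);	/* was: htons(x) */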
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'mac80211-next-for-davem-2015-05-19' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next
Johannes Berg says:
====================
This just has a few fixes:
* LED throughput trigger was crashing
* fast-xmit wasn't treating QoS changes in IBSS correctly
* TDLS could use the wrong channel definition
* using a reserved channel context could use the wrong channel width
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The hwmon interface in the be2net driver causes a link error when
be2net is built-in while the hwmon subsystem is a loadable module:
drivers/built-in.o: In function `be_probe':
drivers/net/ethernet/emulex/benet/be_main.c:5761: undefined reference to `devm_hwmon_device_register_with_groups'
This adds a new Kconfig symbol, following the example of multiple
other drivers that have the same problem. The new CONFIG_BE2NET_HWMON
will not be available when (BE2NET=y && HWMON=m) to avoid this
problem.
We have to also mark be_hwmon_show_temp as 'static' to ensure the
compiler can optimize out all the unused code.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 29e9122b3a ("be2net: Export board temperature using hwmon-sysfs interface.")
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, a getsockopt() requesting the cached contents of the SYN
packet headers will fail silently if the caller uses a buffer that is
too small to contain the requested data. Rather than failing silently and
discarding the headers, getsockopt() should return an error and report the
size required to hold the data.
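From user space, the new contract looks roughly like this (sketch;
assumes the error is reported as EINVAL with the required size written
back through optlen, per the description above):

	unsigned char buf[16];		/* deliberately too small */
	socklen_t len = sizeof(buf);

	if (getsockopt(fd, IPPROTO_TCP, TCP_SAVED_SYN, buf, &len) < 0 &&
	    errno == EINVAL) {
		/* len now holds the size needed for the saved headers */
	}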
Signed-off-by: Eric B Munson <emunson@akamai.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Cc: James Morris <jmorris@namei.org>
Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
Cc: Patrick McHardy <kaber@trash.net>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
Andy Zhou says:
====================
fragmentation ICMP
Currently, we send ICMP packets when errors occur during fragmentation or
de-fragmentation. However, sending those ICMP packets is a bug in the
context of using netfilter for bridging.
Those ICMP packets are only expected in the context of routing, not in
bridging mode.
The local stack is not involved in bridging forwarding decisions, and thus
should not be used for deciding the reverse path for those ICMP messages.
This bug only affects IPv4, not IPv6.
v1->v2: restructure the patches into two patches that fix defragmentation and
fragmentation respectively.
A bit is added in IPCB to control whether an ICMP packet should be
generated for defragmentation.
Fragmentation ICMP is now removed by restructuring the
ip_fragment() API.
v2->v3: Add dropping of ICMP for bridging conntrack users.
Drop exporting the ip_fragment() API.
v3->v4: Remove unnecessary parentheses in 'return' statements
v4->v5: Drop the patch that sets and checks a bit in IPCB
that prevents ip_defrag to send ICMP.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
When bridge netfilter re-fragments an IP packet for output, all
packets that cannot be re-fragmented to their original input size
should be silently discarded.
However, the current bridge netfilter output path generates an ICMP packet
with a 'size exceeded MTU' message for such packets; this is a bug.
This patch refactors the ip_fragment() API to allow two separate
use cases: the bridge netfilter use case will not
send ICMP, while the routing output will, as before.
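A rough sketch of the split (illustrative, not the exact diff): the
routing path keeps the ICMP behaviour, while bridge netfilter calls the
ICMP-free helper directly:

static int ip_fragment(struct sock *sk, struct sk_buff *skb,
		       unsigned int mtu,
		       int (*output)(struct sock *, struct sk_buff *))
{
	if (ip_hdr(skb)->frag_off & htons(IP_DF)) {
		icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
			  htonl(mtu));
		kfree_skb(skb);
		return -EMSGSIZE;
	}
	return ip_do_fragment(sk, skb, output);	/* never sends ICMP */
}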
Signed-off-by: Andy Zhou <azhou@nicira.com>
Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>