/*
 *	IPv6 fragment reassembly
 *	Linux INET6 implementation
 *
 *	Authors:
 *	Pedro Roque		<roque@di.fc.ul.pt>
 *
 *	Based on: net/ipv4/ip_fragment.c
 *
 *	This program is free software; you can redistribute it and/or
 *	modify it under the terms of the GNU General Public License
 *	as published by the Free Software Foundation; either version
 *	2 of the License, or (at your option) any later version.
 */

/*
 *	Fixes:
 *	Andi Kleen		Make it work with multiple hosts.
 *				More RFC compliance.
 *
 *	Horst von Brand		Add missing #include <linux/string.h>
 *	Alexey Kuznetsov	SMP races, threading, cleanup.
 *	Patrick McHardy		LRU queue of frag heads for evictor.
 *	Mitsuru KANDA @USAGI	Register inet6_protocol{}.
 *	David Stevens and
 *	YOSHIFUJI,H. @USAGI	Always remove fragment header to
 *				calculate ICV correctly.
 */

#define pr_fmt(fmt) "IPv6: " fmt
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/string.h>
#include <linux/socket.h>
#include <linux/sockios.h>
#include <linux/jiffies.h>
#include <linux/net.h>
#include <linux/list.h>
#include <linux/netdevice.h>
#include <linux/in6.h>
#include <linux/ipv6.h>
#include <linux/icmpv6.h>
#include <linux/random.h>
#include <linux/jhash.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/export.h>

#include <net/sock.h>
#include <net/snmp.h>

#include <net/ipv6.h>
#include <net/ip6_route.h>
#include <net/protocol.h>
#include <net/transp_v6.h>
#include <net/rawv6.h>
#include <net/ndisc.h>
#include <net/addrconf.h>
#include <net/inet_frag.h>
#include <net/inet_ecn.h>

static const char ip6_frag_cache_name[] = "ip6-frags";

struct ip6frag_skb_cb {
	struct inet6_skb_parm	h;
	int			offset;
};

#define FRAG6_CB(skb)	((struct ip6frag_skb_cb *)((skb)->cb))

static u8 ip6_frag_ecn(const struct ipv6hdr *ipv6h)
{
	return 1 << (ipv6_get_dsfield(ipv6h) & INET_ECN_MASK);
}
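
/*
 * Illustrative note (not in the original source): the two ECN bits of
 * the traffic class are turned into a one-hot bitmask so the ECN state
 * of every fragment can simply be OR-ed into fq->ecn and later folded
 * through ip_frag_ecn_table.  For example, a Not-ECT fragment (bits 00)
 * contributes 1 << 0 = 0x1 and a CE fragment (bits 11) contributes
 * 1 << 3 = 0x8; a queue that saw both ends up with fq->ecn == 0x9.
 */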

static struct inet_frags ip6_frags;

static int ip6_frag_reasm(struct frag_queue *fq, struct sk_buff *prev,
			  struct net_device *dev);

void ip6_frag_init(struct inet_frag_queue *q, const void *a)
{
	struct frag_queue *fq = container_of(q, struct frag_queue, q);
	const struct frag_v6_compare_key *key = a;

	q->key.v6 = *key;
	fq->ecn = 0;
}
EXPORT_SYMBOL(ip6_frag_init);
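
/*
 * Sketch of how this constructor is reached (an assumption drawn from
 * the inet_frag core, not spelled out here): inet_frag_find() allocates
 * a new inet_frag_queue on a lookup miss and invokes f->constructor with
 * the lookup key before inserting the queue into the per-netns
 * rhashtable, so the key copied above is what ip6_obj_hashfn() and
 * ip6_obj_cmpfn() below operate on.
 */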

void ip6_expire_frag_queue(struct net *net, struct frag_queue *fq)
{
	struct net_device *dev = NULL;

	spin_lock(&fq->q.lock);

	if (fq->q.flags & INET_FRAG_COMPLETE)
		goto out;

	inet_frag_kill(&fq->q);

	rcu_read_lock();
	dev = dev_get_by_index_rcu(net, fq->iif);
	if (!dev)
		goto out_rcu_unlock;

	__IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_REASMFAILS);
	__IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_REASMTIMEOUT);

	/* Don't send error if the first segment did not arrive. */
	if (!(fq->q.flags & INET_FRAG_FIRST_IN) || !fq->q.fragments)
		goto out_rcu_unlock;

	/* But use as source the device on which the LAST ARRIVED
	 * segment was received.  And do not use the fq->dev pointer
	 * directly; the device might have disappeared already.
	 */
	fq->q.fragments->dev = dev;
	icmpv6_send(fq->q.fragments, ICMPV6_TIME_EXCEED, ICMPV6_EXC_FRAGTIME, 0);
out_rcu_unlock:
	rcu_read_unlock();
out:
	spin_unlock(&fq->q.lock);
	inet_frag_put(&fq->q);
}
EXPORT_SYMBOL(ip6_expire_frag_queue);
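
/*
 * Background note (hedged; values taken from the ipv6 headers as I
 * understand them): each queue arms a timer for net->ipv6.frags.timeout
 * jiffies (IPV6_FRAG_TIMEOUT, nominally 60 * HZ).  On expiry an ICMPv6
 * Time Exceeded message with code 1 ("fragment reassembly time
 * exceeded") is sent back, but only when the first fragment was
 * received, as RFC 2460/8200 section 4.5 requires.
 */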

static void ip6_frag_expire(struct timer_list *t)
{
	struct inet_frag_queue *frag = from_timer(frag, t, timer);
	struct frag_queue *fq;
	struct net *net;

	fq = container_of(frag, struct frag_queue, q);
	net = container_of(fq->q.net, struct net, ipv6.frags);

	ip6_expire_frag_queue(net, fq);
}

static struct frag_queue *
fq_find(struct net *net, __be32 id, const struct ipv6hdr *hdr, int iif)
{
	struct frag_v6_compare_key key = {
		.id = id,
		.saddr = hdr->saddr,
		.daddr = hdr->daddr,
		.user = IP6_DEFRAG_LOCAL_DELIVER,
		.iif = iif,
	};
	struct inet_frag_queue *q;

	if (!(ipv6_addr_type(&hdr->daddr) & (IPV6_ADDR_MULTICAST |
					     IPV6_ADDR_LINKLOCAL)))
		key.iif = 0;

	q = inet_frag_find(&net->ipv6.frags, &key);
	if (!q)
		return NULL;

	return container_of(q, struct frag_queue, q);
}
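
/*
 * Why key.iif is cleared above (illustration; my reading of the code):
 * only multicast and link-local destinations are interface-scoped, so
 * for a global destination address the fragments of one datagram may
 * legitimately arrive on different interfaces and must still hash to
 * the same queue.  Keeping iif in the key for scoped addresses prevents
 * fragments from unrelated links being mixed together.
 */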

static int ip6_frag_queue(struct frag_queue *fq, struct sk_buff *skb,
			  struct frag_hdr *fhdr, int nhoff)
{
	struct sk_buff *prev, *next;
	struct net_device *dev;
	int offset, end, fragsize;
	struct net *net = dev_net(skb_dst(skb)->dev);
	u8 ecn;

	if (fq->q.flags & INET_FRAG_COMPLETE)
		goto err;

	offset = ntohs(fhdr->frag_off) & ~0x7;
	end = offset + (ntohs(ipv6_hdr(skb)->payload_len) -
			((u8 *)(fhdr + 1) - (u8 *)(ipv6_hdr(skb) + 1)));

	if ((unsigned int)end > IPV6_MAXPLEN) {
		__IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
				IPSTATS_MIB_INHDRERRORS);
		icmpv6_param_prob(skb, ICMPV6_HDR_FIELD,
				  ((u8 *)&fhdr->frag_off -
				   skb_network_header(skb)));
		return -1;
	}
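
	/*
	 * Worked example (illustrative numbers): a frag_off field of
	 * 0x0059 in host order gives offset = 0x59 & ~0x7 = 88 bytes
	 * and has the IP6_MF bit (0x0001) set.  "end" is then the
	 * offset one past this fragment's last byte: the IPv6 payload
	 * length minus whatever bytes sit between the end of the fixed
	 * IPv6 header and the end of the fragment header.
	 */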

	ecn = ip6_frag_ecn(ipv6_hdr(skb));

	if (skb->ip_summed == CHECKSUM_COMPLETE) {
		const unsigned char *nh = skb_network_header(skb);
		skb->csum = csum_sub(skb->csum,
				     csum_partial(nh, (u8 *)(fhdr + 1) - nh,
						  0));
	}

	/* Is this the final fragment? */
	if (!(fhdr->frag_off & htons(IP6_MF))) {
		/* If we already have some bits beyond end
		 * or have different end, the segment is corrupted.
		 */
		if (end < fq->q.len ||
		    ((fq->q.flags & INET_FRAG_LAST_IN) && end != fq->q.len))
			goto err;
		fq->q.flags |= INET_FRAG_LAST_IN;
		fq->q.len = end;
	} else {
		/* Check if the fragment is rounded to 8 bytes.
		 * Required by the RFC.
		 */
		if (end & 0x7) {
			/* RFC2460 says always send parameter problem in
			 * this case. -DaveM
			 */
			__IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
					IPSTATS_MIB_INHDRERRORS);
			icmpv6_param_prob(skb, ICMPV6_HDR_FIELD,
					  offsetof(struct ipv6hdr, payload_len));
			return -1;
		}
		if (end > fq->q.len) {
			/* Some bits beyond end -> corruption. */
			if (fq->q.flags & INET_FRAG_LAST_IN)
				goto err;
			fq->q.len = end;
		}
	}

	if (end == offset)
		goto err;

	/* Point into the IP datagram 'data' part. */
	if (!pskb_pull(skb, (u8 *)(fhdr + 1) - skb->data))
		goto err;

	if (pskb_trim_rcsum(skb, end - offset))
		goto err;

	/* Find out which fragments are in front and at the back of us
	 * in the chain of fragments so far.  We must know where to put
	 * this fragment, right?
	 */
	prev = fq->q.fragments_tail;
	if (!prev || FRAG6_CB(prev)->offset < offset) {
		next = NULL;
		goto found;
	}
	prev = NULL;
	for (next = fq->q.fragments; next != NULL; next = next->next) {
		if (FRAG6_CB(next)->offset >= offset)
			break;	/* bingo! */
		prev = next;
	}

found:
	/* RFC5722, Section 4, amended by Errata ID 3089:
	 * When reassembling an IPv6 datagram, if one or more of its
	 * constituent fragments is determined to be an overlapping
	 * fragment, the entire datagram (and any constituent
	 * fragments) MUST be silently discarded.
	 */

	/* Check for overlap with the preceding fragment. */
	if (prev &&
	    (FRAG6_CB(prev)->offset + prev->len) > offset)
		goto discard_fq;

	/* Look for overlap with the succeeding segment. */
	if (next && FRAG6_CB(next)->offset < end)
		goto discard_fq;
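
	/*
	 * Interval picture (illustrative): a queued fragment "prev"
	 * covers bytes [prev_off, prev_off + prev->len) of the original
	 * datagram and the new one covers [offset, end).  The two tests
	 * above reject any intersection in either direction; per
	 * RFC 5722 the whole queue is then dropped via discard_fq
	 * rather than trimming the overlap as IPv4 historically did.
	 */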

	FRAG6_CB(skb)->offset = offset;

	/* Insert this fragment in the chain of fragments. */
	skb->next = next;
	if (!next)
		fq->q.fragments_tail = skb;
	if (prev)
		prev->next = skb;
	else
		fq->q.fragments = skb;

	dev = skb->dev;
	if (dev) {
		fq->iif = dev->ifindex;
		skb->dev = NULL;
	}
	fq->q.stamp = skb->tstamp;
	fq->q.meat += skb->len;
	fq->ecn |= ecn;
	add_frag_mem_limit(fq->q.net, skb->truesize);

	fragsize = -skb_network_offset(skb) + skb->len;
	if (fragsize > fq->q.max_size)
		fq->q.max_size = fragsize;

	/* The first fragment.
	 * nhoffset is obtained from the first fragment, of course.
	 */
	if (offset == 0) {
		fq->nhoffset = nhoff;
		fq->q.flags |= INET_FRAG_FIRST_IN;
	}

	if (fq->q.flags == (INET_FRAG_FIRST_IN | INET_FRAG_LAST_IN) &&
	    fq->q.meat == fq->q.len) {
		int res;
		unsigned long orefdst = skb->_skb_refdst;

		skb->_skb_refdst = 0UL;
		res = ip6_frag_reasm(fq, prev, dev);
		skb->_skb_refdst = orefdst;
		return res;
	}

	skb_dst_drop(skb);
	return -1;

discard_fq:
	inet_frag_kill(&fq->q);
err:
	__IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
			IPSTATS_MIB_REASMFAILS);
	kfree_skb(skb);
	return -1;
}
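
/*
 * Return convention (my reading; hedged): -1 tells the caller the skb
 * has been absorbed, either queued for later reassembly or freed on
 * error, while a completed reassembly returns whatever ip6_frag_reasm()
 * returned (1 on success).  A positive return from an inet6 protocol
 * handler makes ip6_input() resubmit the now-reassembled packet for
 * further header processing.
 */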

/*
 *	Check if this packet is complete.
 *	Returns -1 on failure, or 1 once the datagram has been
 *	reassembled in place (the offset of its next-header field is
 *	recorded in IP6CB(head)->nhoff).
 *
 *	It is called with locked fq, and caller must check that
 *	queue is eligible for reassembly i.e. it is not COMPLETE,
 *	the last and the first frames arrived and all the bits are here.
 */
static int ip6_frag_reasm(struct frag_queue *fq, struct sk_buff *prev,
			  struct net_device *dev)
{
	struct net *net = container_of(fq->q.net, struct net, ipv6.frags);
	struct sk_buff *fp, *head = fq->q.fragments;
	int payload_len;
	unsigned int nhoff;
	int sum_truesize;
	u8 ecn;

	inet_frag_kill(&fq->q);

	ecn = ip_frag_ecn_table[fq->ecn];
	if (unlikely(ecn == 0xff))
		goto out_fail;

	/* Make the one we just received the head. */
	if (prev) {
		head = prev->next;
		fp = skb_clone(head, GFP_ATOMIC);

		if (!fp)
			goto out_oom;

		fp->next = head->next;
		if (!fp->next)
			fq->q.fragments_tail = fp;
		prev->next = fp;

		skb_morph(head, fq->q.fragments);
		head->next = fq->q.fragments->next;

		consume_skb(fq->q.fragments);
		fq->q.fragments = head;
	}
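
	/*
	 * What the swap above achieves (hedged explanation): "prev" is
	 * the fragment queued just before the skb that triggered this
	 * reassembly, so prev->next is the freshly received skb.  A
	 * clone takes its place in the chain, and skb_morph() then
	 * transplants the old head's identity onto the fresh skb, so
	 * the reassembled packet travels on in the context of the
	 * packet that is currently being received.
	 */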

	WARN_ON(head == NULL);
	WARN_ON(FRAG6_CB(head)->offset != 0);

	/* Unfragmented part is taken from the first segment. */
	payload_len = ((head->data - skb_network_header(head)) -
		       sizeof(struct ipv6hdr) + fq->q.len -
		       sizeof(struct frag_hdr));
	if (payload_len > IPV6_MAXPLEN)
		goto out_oversize;

	/* Head of list must not be cloned. */
	if (skb_unclone(head, GFP_ATOMIC))
		goto out_oom;

	/* If the first fragment is fragmented itself, we split
	 * it to two chunks: the first with data and paged part
	 * and the second, holding only fragments.
	 */
	if (skb_has_frag_list(head)) {
		struct sk_buff *clone;
		int i, plen = 0;

		clone = alloc_skb(0, GFP_ATOMIC);
		if (!clone)
			goto out_oom;
		clone->next = head->next;
		head->next = clone;
		skb_shinfo(clone)->frag_list = skb_shinfo(head)->frag_list;
		skb_frag_list_init(head);
		for (i = 0; i < skb_shinfo(head)->nr_frags; i++)
			plen += skb_frag_size(&skb_shinfo(head)->frags[i]);
		clone->len = clone->data_len = head->data_len - plen;
		head->data_len -= clone->len;
		head->len -= clone->len;
		clone->csum = 0;
		clone->ip_summed = head->ip_summed;
		add_frag_mem_limit(fq->q.net, clone->truesize);
	}

	/* We have to remove the fragment header from the datagram and
	 * to relocate the header in order to calculate the ICV correctly.
	 */
	nhoff = fq->nhoffset;
	skb_network_header(head)[nhoff] = skb_transport_header(head)[0];
	memmove(head->head + sizeof(struct frag_hdr), head->head,
		(head->data - head->head) - sizeof(struct frag_hdr));
	if (skb_mac_header_was_set(head))
		head->mac_header += sizeof(struct frag_hdr);
	head->network_header += sizeof(struct frag_hdr);

	skb_reset_transport_header(head);
	skb_push(head, head->data - skb_network_header(head));
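
	/*
	 * Layout sketch (illustrative): the 8-byte fragment header sits
	 * between the IPv6 header (plus any per-fragment extension
	 * headers) and the payload.  The assignment above patches the
	 * preceding header's nexthdr byte to the value the fragment
	 * header carried, and the memmove() shifts everything before
	 * the payload forward by sizeof(struct frag_hdr) = 8 bytes,
	 * overwriting the fragment header in place.
	 */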

	sum_truesize = head->truesize;
	for (fp = head->next; fp;) {
		bool headstolen;
		int delta;
		struct sk_buff *next = fp->next;

		sum_truesize += fp->truesize;
		if (head->ip_summed != fp->ip_summed)
			head->ip_summed = CHECKSUM_NONE;
		else if (head->ip_summed == CHECKSUM_COMPLETE)
			head->csum = csum_add(head->csum, fp->csum);

		if (skb_try_coalesce(head, fp, &headstolen, &delta)) {
			kfree_skb_partial(fp, headstolen);
		} else {
			if (!skb_shinfo(head)->frag_list)
				skb_shinfo(head)->frag_list = fp;
			head->data_len += fp->len;
			head->len += fp->len;
			head->truesize += fp->truesize;
		}
		fp = next;
	}
	sub_frag_mem_limit(fq->q.net, sum_truesize);

	head->next = NULL;
	head->dev = dev;
	head->tstamp = fq->q.stamp;
	ipv6_hdr(head)->payload_len = htons(payload_len);
	ipv6_change_dsfield(ipv6_hdr(head), 0xff, ecn);
	IP6CB(head)->nhoff = nhoff;
	IP6CB(head)->flags |= IP6SKB_FRAGMENTED;
	IP6CB(head)->frag_max_size = fq->q.max_size;

	/* Yes, and fold redundant checksum back. 8) */
	skb_postpush_rcsum(head, skb_network_header(head),
			   skb_network_header_len(head));

	rcu_read_lock();
	__IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_REASMOKS);
	rcu_read_unlock();
	fq->q.fragments = NULL;
	fq->q.fragments_tail = NULL;
	return 1;

out_oversize:
	net_dbg_ratelimited("ip6_frag_reasm: payload len = %d\n", payload_len);
	goto out_fail;
out_oom:
	net_dbg_ratelimited("ip6_frag_reasm: no memory for reassembly\n");
out_fail:
	rcu_read_lock();
	__IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_REASMFAILS);
	rcu_read_unlock();
	return -1;
}

static int ipv6_frag_rcv(struct sk_buff *skb)
{
	struct frag_hdr *fhdr;
	struct frag_queue *fq;
	const struct ipv6hdr *hdr = ipv6_hdr(skb);
	struct net *net = dev_net(skb_dst(skb)->dev);
	int iif;

	if (IP6CB(skb)->flags & IP6SKB_FRAGMENTED)
		goto fail_hdr;

	__IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_REASMREQDS);

	/* Jumbo payload inhibits frag. header */
	if (hdr->payload_len == 0)
		goto fail_hdr;

	if (!pskb_may_pull(skb, (skb_transport_offset(skb) +
				 sizeof(struct frag_hdr))))
		goto fail_hdr;

	hdr = ipv6_hdr(skb);
	fhdr = (struct frag_hdr *)skb_transport_header(skb);

	if (!(fhdr->frag_off & htons(0xFFF9))) {
		/* It is not a fragmented frame */
		skb->transport_header += sizeof(struct frag_hdr);
		__IP6_INC_STATS(net,
				ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_REASMOKS);

		IP6CB(skb)->nhoff = (u8 *)fhdr - skb_network_header(skb);
		IP6CB(skb)->flags |= IP6SKB_FRAGMENTED;
		return 1;
	}
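
	/*
	 * Mask breakdown (illustrative): in the frag_off field, bits
	 * 0xFFF8 hold the 13-bit fragment offset and bit 0x0001 is the
	 * M (more fragments) flag, so htons(0xFFF9) tests both at once.
	 * If neither is set this is an "atomic fragment" - a fragment
	 * header on an unfragmented packet - and it is passed through
	 * without creating reassembly state.
	 */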

	iif = skb->dev ? skb->dev->ifindex : 0;
	fq = fq_find(net, fhdr->identification, hdr, iif);
	if (fq) {
		int ret;

		spin_lock(&fq->q.lock);

		fq->iif = iif;
		ret = ip6_frag_queue(fq, skb, fhdr, IP6CB(skb)->nhoff);

		spin_unlock(&fq->q.lock);
		inet_frag_put(&fq->q);
		return ret;
	}

	__IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_REASMFAILS);
	kfree_skb(skb);
	return -1;

fail_hdr:
	__IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
			IPSTATS_MIB_INHDRERRORS);
	icmpv6_param_prob(skb, ICMPV6_HDR_FIELD, skb_network_header_len(skb));
	return -1;
}

static const struct inet6_protocol frag_protocol = {
	.handler	=	ipv6_frag_rcv,
	.flags		=	INET6_PROTO_NOPOLICY,
};
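
/*
 * Usage note (hedged): this handler is registered for IPPROTO_FRAGMENT
 * (next-header value 44) in ipv6_frag_init() below, so ipv6_frag_rcv()
 * runs whenever extension-header parsing in the IPv6 input path meets
 * a Fragment header.
 */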

#ifdef CONFIG_SYSCTL
static int zero;

static struct ctl_table ip6_frags_ns_ctl_table[] = {
	{
		.procname	= "ip6frag_high_thresh",
		.data		= &init_net.ipv6.frags.high_thresh,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &init_net.ipv6.frags.low_thresh
	},
	{
		.procname	= "ip6frag_low_thresh",
		.data		= &init_net.ipv6.frags.low_thresh,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,
		.extra2		= &init_net.ipv6.frags.high_thresh
	},
	{
		.procname	= "ip6frag_time",
		.data		= &init_net.ipv6.frags.timeout,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_jiffies,
	},
	{ }
};

/* secret interval has been deprecated */
static int ip6_frags_secret_interval_unused;
static struct ctl_table ip6_frags_ctl_table[] = {
	{
		.procname	= "ip6frag_secret_interval",
		.data		= &ip6_frags_secret_interval_unused,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_jiffies,
	},
	{ }
};
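
/*
 * Hedged usage sketch: these tables surface the per-netns reassembly
 * tunables under /proc/sys/net/ipv6/, e.g.
 *
 *	# sysctl net.ipv6.ip6frag_time
 *	# sysctl -w net.ipv6.ip6frag_high_thresh=4194304
 *
 * The minmax handlers keep low_thresh <= high_thresh, and
 * ip6frag_secret_interval is retained only so old scripts keep working.
 */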

static int __net_init ip6_frags_ns_sysctl_register(struct net *net)
{
	struct ctl_table *table;
	struct ctl_table_header *hdr;

	table = ip6_frags_ns_ctl_table;
	if (!net_eq(net, &init_net)) {
		table = kmemdup(table, sizeof(ip6_frags_ns_ctl_table), GFP_KERNEL);
		if (!table)
			goto err_alloc;

		table[0].data	= &net->ipv6.frags.high_thresh;
		table[0].extra1	= &net->ipv6.frags.low_thresh;
		table[0].extra2	= &init_net.ipv6.frags.high_thresh;
		table[1].data	= &net->ipv6.frags.low_thresh;
		table[1].extra2	= &net->ipv6.frags.high_thresh;
		table[2].data	= &net->ipv6.frags.timeout;
	}

	hdr = register_net_sysctl(net, "net/ipv6", table);
	if (!hdr)
		goto err_reg;

	net->ipv6.sysctl.frags_hdr = hdr;
	return 0;

err_reg:
	if (!net_eq(net, &init_net))
		kfree(table);
err_alloc:
	return -ENOMEM;
}

static void __net_exit ip6_frags_ns_sysctl_unregister(struct net *net)
{
	struct ctl_table *table;

	table = net->ipv6.sysctl.frags_hdr->ctl_table_arg;
	unregister_net_sysctl_table(net->ipv6.sysctl.frags_hdr);
	if (!net_eq(net, &init_net))
		kfree(table);
}

static struct ctl_table_header *ip6_ctl_header;

static int ip6_frags_sysctl_register(void)
{
	ip6_ctl_header = register_net_sysctl(&init_net, "net/ipv6",
					     ip6_frags_ctl_table);
	return ip6_ctl_header == NULL ? -ENOMEM : 0;
}

static void ip6_frags_sysctl_unregister(void)
{
	unregister_net_sysctl_table(ip6_ctl_header);
}
#else
static int ip6_frags_ns_sysctl_register(struct net *net)
{
	return 0;
}

static void ip6_frags_ns_sysctl_unregister(struct net *net)
{
}

static int ip6_frags_sysctl_register(void)
{
	return 0;
}

static void ip6_frags_sysctl_unregister(void)
{
}
#endif

static int __net_init ipv6_frags_init_net(struct net *net)
{
	int res;

	net->ipv6.frags.high_thresh = IPV6_FRAG_HIGH_THRESH;
	net->ipv6.frags.low_thresh = IPV6_FRAG_LOW_THRESH;
	net->ipv6.frags.timeout = IPV6_FRAG_TIMEOUT;
	net->ipv6.frags.f = &ip6_frags;

	res = inet_frags_init_net(&net->ipv6.frags);
	if (res < 0)
		return res;

	res = ip6_frags_ns_sysctl_register(net);
	if (res < 0)
		inet_frags_exit_net(&net->ipv6.frags);
	return res;
}

static void __net_exit ipv6_frags_exit_net(struct net *net)
{
	ip6_frags_ns_sysctl_unregister(net);
	inet_frags_exit_net(&net->ipv6.frags);
}

static struct pernet_operations ip6_frags_ops = {
	.init = ipv6_frags_init_net,
	.exit = ipv6_frags_exit_net,
};

static u32 ip6_key_hashfn(const void *data, u32 len, u32 seed)
{
	return jhash2(data,
		      sizeof(struct frag_v6_compare_key) / sizeof(u32), seed);
}

static u32 ip6_obj_hashfn(const void *data, u32 len, u32 seed)
{
	const struct inet_frag_queue *fq = data;

	return jhash2((const u32 *)&fq->key.v6,
		      sizeof(struct frag_v6_compare_key) / sizeof(u32), seed);
}

static int ip6_obj_cmpfn(struct rhashtable_compare_arg *arg, const void *ptr)
{
	const struct frag_v6_compare_key *key = arg->key;
	const struct inet_frag_queue *fq = ptr;

	return !!memcmp(&fq->key, key, sizeof(*key));
}

const struct rhashtable_params ip6_rhash_params = {
	.head_offset		= offsetof(struct inet_frag_queue, node),
	.hashfn			= ip6_key_hashfn,
	.obj_hashfn		= ip6_obj_hashfn,
	.obj_cmpfn		= ip6_obj_cmpfn,
	.automatic_shrinking	= true,
};
EXPORT_SYMBOL(ip6_rhash_params);
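
/*
 * Note (my understanding, hedged): jhash2() hashes an array of u32
 * words, so struct frag_v6_compare_key must stay a multiple of 4 bytes
 * with no uninitialized padding for ip6_key_hashfn(), ip6_obj_hashfn()
 * and the memcmp() in ip6_obj_cmpfn() to agree.  obj_cmpfn returns 0
 * on a match, which is the convention rhashtable expects.
 */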

int __init ipv6_frag_init(void)
{
	int ret;

	ip6_frags.constructor = ip6_frag_init;
	ip6_frags.destructor = NULL;
	ip6_frags.qsize = sizeof(struct frag_queue);
	ip6_frags.frag_expire = ip6_frag_expire;
	ip6_frags.frags_cache_name = ip6_frag_cache_name;
	ip6_frags.rhash_params = ip6_rhash_params;
	ret = inet_frags_init(&ip6_frags);
	if (ret)
		goto out;

	ret = inet6_add_protocol(&frag_protocol, IPPROTO_FRAGMENT);
	if (ret)
		goto err_protocol;

	ret = ip6_frags_sysctl_register();
	if (ret)
		goto err_sysctl;

	ret = register_pernet_subsys(&ip6_frags_ops);
	if (ret)
		goto err_pernet;

out:
	return ret;

err_pernet:
	ip6_frags_sysctl_unregister();
err_sysctl:
	inet6_del_protocol(&frag_protocol, IPPROTO_FRAGMENT);
err_protocol:
	inet_frags_fini(&ip6_frags);
	goto out;
}

void ipv6_frag_exit(void)
{
	inet_frags_fini(&ip6_frags);
	ip6_frags_sysctl_unregister();
	unregister_pernet_subsys(&ip6_frags_ops);
	inet6_del_protocol(&frag_protocol, IPPROTO_FRAGMENT);
}