ibmveth: Fix off-by-one error in ibmveth_change_mtu()
AFAIK the PAPR document which defines the virtual device interface used by the ibmveth driver doesn't specify a specific maximum MTU. So, in the ibmveth driver, the maximum allowed MTU is determined by the maximum allocated buffer size of 64k (corresponding to one page, in the common case) minus the per-buffer overhead IBMVETH_BUFF_OH (which has value 22 for 14 bytes of ethernet header, plus 8 bytes for an opaque handle).

This suggests a maximum allowable MTU of 65514 bytes, but in fact the driver only permits a maximum MTU of 65513. This is because there is a < instead of an <= in ibmveth_change_mtu(), which only permits an MTU which is strictly smaller than the buffer size, rather than allowing the buffer to be completely filled.

This patch fixes the buglet.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit 4fce14820c
parent ec65aafb9e
@@ -1238,7 +1238,7 @@ static int ibmveth_change_mtu(struct net_device *dev, int new_mtu)
 		return -EINVAL;
 
 	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
-		if (new_mtu_oh < adapter->rx_buff_pool[i].buff_size)
+		if (new_mtu_oh <= adapter->rx_buff_pool[i].buff_size)
 			break;
 
 	if (i == IBMVETH_NUM_BUFF_POOLS)
@@ -1257,7 +1257,7 @@ static int ibmveth_change_mtu(struct net_device *dev, int new_mtu)
 	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
 		adapter->rx_buff_pool[i].active = 1;
 
-		if (new_mtu_oh < adapter->rx_buff_pool[i].buff_size) {
+		if (new_mtu_oh <= adapter->rx_buff_pool[i].buff_size) {
 			dev->mtu = new_mtu;
 			vio_cmo_set_dev_desired(viodev,
 						ibmveth_get_desired_dma
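For illustration only (not part of the patch), here is a minimal standalone sketch of the boundary case described in the commit message. It assumes, as the hunks above suggest, that new_mtu_oh is new_mtu plus IBMVETH_BUFF_OH; MAX_BUFF_SIZE is a stand-in name for the 64k buffer limit, not a driver constant.

/* Standalone userspace sketch of the '<' vs '<=' boundary, not driver code. */
#include <stdio.h>

#define IBMVETH_BUFF_OH  22           /* 14-byte ethernet header + 8-byte opaque handle */
#define MAX_BUFF_SIZE    (64 * 1024)  /* assumed 64k buffer limit */

int main(void)
{
	int new_mtu;

	for (new_mtu = 65512; new_mtu <= 65515; new_mtu++) {
		int new_mtu_oh = new_mtu + IBMVETH_BUFF_OH;

		printf("MTU %d (+overhead = %d): '<' %s, '<=' %s\n",
		       new_mtu, new_mtu_oh,
		       new_mtu_oh < MAX_BUFF_SIZE ? "accepts" : "rejects",
		       new_mtu_oh <= MAX_BUFF_SIZE ? "accepts" : "rejects");
	}
	return 0;
}

With the original '<', an MTU of 65514 (65536 bytes including overhead) is rejected even though it exactly fills the buffer; the '<=' check accepts it, while anything larger is still refused.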