Commit Graph

22 Commits

Author SHA1 Message Date
Roland Dreier e727f5cde9 mlx4_core: Fix dma_sync_single_for_cpu() with matching for_device() calls
Commit 5d23a1d2 ("net: replace dma_sync_single with
dma_sync_single_for_cpu") replaced uses of the deprectated function
dma_sync_single() with calls to dma_sync_single_for_cpu().  However,
to be correct, the code should do a sync for_cpu() before touching the
memory and for_device() after it's done.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2009-06-22 23:07:56 -07:00
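For reference, the for_cpu()/for_device() pairing the commit describes
looks roughly like the sketch below; the function and variable names are
placeholders, not the actual mlx4 code:

#include <linux/dma-mapping.h>
#include <linux/string.h>

/* Placeholder example: sync for the CPU, touch the buffer, sync back. */
static void touch_shared_buffer(struct device *dev, dma_addr_t dma_addr,
				void *buf, size_t size)
{
	/* Transfer ownership of the buffer to the CPU before accessing it. */
	dma_sync_single_for_cpu(dev, dma_addr, size, DMA_BIDIRECTIONAL);

	memset(buf, 0, size);		/* CPU reads/writes the memory here */

	/* Hand ownership back to the device once the CPU is done. */
	dma_sync_single_for_device(dev, dma_addr, size, DMA_BIDIRECTIONAL);
}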
David S. Miller 9cbc1cb8cd Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/torvalds/linux-2.6
Conflicts:
	Documentation/feature-removal-schedule.txt
	drivers/scsi/fcoe/fcoe.c
	net/core/drop_monitor.c
	net/core/net-traces.c
2009-06-15 03:02:23 -07:00
FUJITA Tomonori 5d23a1d2a3 net: replace dma_sync_single with dma_sync_single_for_cpu
This replaces dma_sync_single() with dma_sync_single_for_cpu() because
dma_sync_single() is an obsolete API; include/linux/dma-mapping.h says:

/* Backwards compat, remove in 2.7.x */
#define dma_sync_single		dma_sync_single_for_cpu
#define dma_sync_sg		dma_sync_sg_for_cpu

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-05-29 01:51:22 -07:00
Eli Cohen ab6bf42e23 mlx4_core: Add module parameter for number of MTTs per segment
The current MTT allocator uses kmalloc() to allocate a buffer for its
buddy allocator, and thus is limited in the amount of MTT segments
that it can control.  As a result, the size of memory that can be
registered is limited too.  This patch uses a module parameter to
control the number of MTT entries that each segment represents,
allowing more memory to be registered with the same number of
segments.

Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2009-05-27 14:38:34 -07:00
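A minimal sketch of how such a knob is typically exposed; the parameter
name, default, and range below are illustrative, not necessarily the exact
ones the patch adds:

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Illustrative: log2 of MTT entries per segment, read-only after load. */
static int log_mtts_per_seg = 3;
module_param_named(log_mtts_per_seg, log_mtts_per_seg, int, 0444);
MODULE_PARM_DESC(log_mtts_per_seg,
		 "Log2 number of MTT entries per segment (assumed range 1-5)");

Loading with a larger value (e.g. log_mtts_per_seg=5) would make each
segment cover more MTT entries, so the same number of segments can back a
larger amount of registered memory.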
Yevgeny Petrilin 93fc9e1bb6 mlx4_core: Support multiple pre-reserved QP regions
For ethernet support, we need to reserve QPs for the ethernet and
fibre channel drivers.  The QPs are reserved at the end of the QP
table.  (This way we ensure that they are aligned to their size.)

We need to consider these reserved ranges in bitmap creation, so we
extend the mlx4 bitmap utility functions to allow reserved ranges at
both the bottom and the top of the range.

Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-10-22 10:25:29 -07:00
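The shape of the extended bitmap allocator might look like the sketch
below; the structure and names are illustrative only.  The bottom
reservation can be handled by pre-setting the first reserved_bot bits at
init time, while the top reservation simply shortens the range the
allocation scan covers:

#include <linux/bitops.h>
#include <linux/types.h>

struct demo_bitmap {
	u32            last;          /* index to start scanning from */
	u32            max;           /* total number of entries in the table */
	u32            reserved_top;  /* entries kept back at the top */
	unsigned long *table;
};

/* At init time the bottom reservation is made by marking the first
 * reserved_bot bits as already allocated, e.g.:
 *     bitmap_set(bitmap->table, 0, reserved_bot);
 */

static int demo_bitmap_alloc(struct demo_bitmap *bitmap)
{
	u32 limit = bitmap->max - bitmap->reserved_top;
	u32 obj;

	/* Only scan below 'limit'; the reserved top range is never handed out. */
	obj = find_next_zero_bit(bitmap->table, limit, bitmap->last);
	if (obj >= limit)
		obj = find_first_zero_bit(bitmap->table, limit);
	if (obj >= limit)
		return -1;

	set_bit(obj, bitmap->table);
	bitmap->last = (obj + 1 < limit) ? obj + 1 : 0;
	return obj;
}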
Vladimir Sokolovsky 29bdc88384 IB/mlx4: Fix up fast register page list format
Byte swap the addresses in the page list for fast register work requests
to big endian to match what the HCA expects.  Also, the addresses must
have the "present" bit set so that the HCA knows it can access them.
Otherwise the HCA will fault the first time it accesses the memory
region.

Signed-off-by: Vladimir Sokolovsky <vlad@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-09-15 14:25:23 -07:00
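In code, the fix amounts to something like the following sketch; the flag
value and names shown are assumptions, not the driver's actual constants:

#include <linux/types.h>
#include <asm/byteorder.h>

#define DEMO_MTT_FLAG_PRESENT	1ULL	/* assumed: low bit marks a valid entry */

/* Convert a CPU-order page list into the big-endian form the HCA reads. */
static void demo_format_page_list(__be64 *hw_list, const u64 *pages, int npages)
{
	int i;

	for (i = 0; i < npages; ++i)
		hw_list[i] = cpu_to_be64(pages[i] | DEMO_MTT_FLAG_PRESENT);
}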
Vladimir Sokolovsky c9257433f2 mlx4_core: Set RAE and init mtt_sz field in FRMR MPT entries
Set the RAE (remote access enable) bit and correctly initialize the
MTT size in MPT entries being set up for fast register memory
regions.  Otherwise the callers can't enable remote access and in fact
can't fast register at all (since the HCA will think no MTT entries
are allocated).

Signed-off-by: Vladimir Sokolovsky <vlad@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-09-02 13:38:29 -07:00
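Roughly, the missing initialization looks like this; the structure layout,
flag name, and bit position below are assumptions for illustration only:

#include <linux/types.h>
#include <asm/byteorder.h>

struct demo_mpt_entry {
	__be32 flags;
	__be32 mtt_sz;
	/* ... remaining hardware fields ... */
};

#define DEMO_MPT_FLAG_RAE	(1 << 28)	/* assumed remote-access-enable bit */

static void demo_init_frmr_mpt(struct demo_mpt_entry *mpt, int max_pages)
{
	/* Allow the consumer to grant remote access on this region. */
	mpt->flags |= cpu_to_be32(DEMO_MPT_FLAG_RAE);

	/* Record how many MTT entries back the region, so fast register works. */
	mpt->mtt_sz = cpu_to_be32(max_pages);
}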
Jack Morgenstein 51a379d0c8 mlx4: Update/add Mellanox Technologies copyright lines to mlx4 driver files
Update existing Mellanox copyright lines to 2008, and add such lines
to files where they are missing.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-07-25 10:32:52 -07:00
Roland Dreier 95d04f0735 IB/mlx4: Add support for memory management extensions and local DMA L_Key
Add support for the following operations to mlx4 when device firmware
supports them:

 - Send with invalidate and local invalidate send queue work requests;
 - Allocate/free fast register MRs;
 - Allocate/free fast register MR page lists;
 - Fast register MR send queue work requests;
 - Local DMA L_Key.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-07-23 08:12:26 -07:00
Roland Dreier e4044cfc49 mlx4_core: Keep free count for MTT buddy allocator
MTT entries are allocated with a buddy allocator, which just keeps
bitmaps for each level of the buddy table.  However, all free space
starts out at the highest order, and small allocations start scanning
from the lowest order.  When the lowest order tables have no free
space, this can lead to scanning potentially millions of bits before
finding a free entry at a higher order.

We can avoid this by just keeping a count of how many free entries
each order has, and skipping the bitmap scan when an order has no
free entries.  This provides a nice performance boost for a
negligible increase in memory usage.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-07-22 14:19:40 -07:00
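The allocation path with per-order free counts looks roughly like the
sketch below (illustrative names, not the driver's exact code); the bitmap
scan for an order is only attempted when num_free[o] says there is
something to find:

#include <linux/bitops.h>
#include <linux/spinlock.h>

struct demo_buddy {
	unsigned long **bits;       /* one free-block bitmap per order */
	unsigned int   *num_free;   /* free-block count per order */
	int             max_order;
	spinlock_t      lock;
};

static int demo_buddy_alloc(struct demo_buddy *buddy, int order)
{
	int o, m, seg;

	spin_lock(&buddy->lock);

	for (o = order; o <= buddy->max_order; ++o) {
		/* Skip the bitmap scan entirely when this order has nothing free. */
		if (!buddy->num_free[o])
			continue;

		m = 1 << (buddy->max_order - o);
		seg = find_first_bit(buddy->bits[o], m);
		if (seg < m)
			goto found;
	}

	spin_unlock(&buddy->lock);
	return -1;

found:
	clear_bit(seg, buddy->bits[o]);
	--buddy->num_free[o];

	/* Split larger blocks down to the requested order, freeing the buddies. */
	while (o > order) {
		--o;
		seg <<= 1;
		set_bit(seg ^ 1, buddy->bits[o]);
		++buddy->num_free[o];
	}

	spin_unlock(&buddy->lock);
	return seg << order;
}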
Oren Duer c5057ddccb mlx4_core: Support creation of FMRs with pages smaller than 4K
Don't hard code a test against a minimum page shift of 12, since the
device may support smaller pages.  Test against the actual smallest
page size from the device capabilities.

Signed-off-by: Oren Duer <oren@mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-05-05 15:56:52 -07:00
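The check being relaxed is essentially the following; the capability
parameter names are assumptions used for illustration:

#include <linux/errno.h>

/* Before: page_shift < 12 was rejected unconditionally.  After: compare
 * against the smallest page shift the device actually reports. */
static int demo_check_fmr_page_shift(int page_shift, int dev_min_page_shift,
				     int dev_max_page_shift)
{
	if (page_shift < dev_min_page_shift || page_shift > dev_max_page_shift)
		return -EINVAL;

	return 0;
}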
Olaf Kirch bbdc2821db mlx4_core: Avoid recycling old FMR R_Keys too soon
When an FMR is unmapped, mlx4 resets the map count to 0 and clears the
upper part of the R_Key, which is used as the sequence counter.

This poses a problem for RDS, which uses ib_fmr_unmap as a fence
operation.  RDS assumes that after issuing an unmap, the old R_Keys
will be invalid for a "reasonable" period of time. For instance,
Oracle processes use shared memory buffers allocated from a pool of
buffers.  When a process dies, we want to reclaim these buffers -- but
we must make sure there are no pending RDMA operations to/from those
buffers.  The only way to achieve that is to unmap them and sync the
TPT.

However, when the sequence count is reset on unmap, there is a high
likelihood that a new mapping will be given the same R_Key that was
issued a few milliseconds ago.

To prevent this, don't reset the sequence count when unmapping an FMR.

Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-04-29 13:46:53 -07:00
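A sketch of the idea, with an assumed key layout (MPT index in the upper
bits, an 8-bit sequence counter in the low byte); none of the names below
are the driver's own:

#include <linux/types.h>

struct demo_fmr {
	u32 mpt_index;
	u8  seq;    /* sequence part of the R_Key */
	int maps;   /* number of active mappings */
};

static u32 demo_build_rkey(const struct demo_fmr *fmr)
{
	/* assumed layout: MPT index in the upper 24 bits, sequence in the low 8 */
	return (fmr->mpt_index << 8) | fmr->seq;
}

static void demo_fmr_map(struct demo_fmr *fmr)
{
	/* Each remap bumps the sequence, producing a fresh R_Key. */
	fmr->seq++;
	fmr->maps++;
}

static void demo_fmr_unmap(struct demo_fmr *fmr)
{
	/* Reset the map count only; keeping 'seq' means a recently issued
	 * R_Key is not handed out again on the very next mapping. */
	fmr->maps = 0;
}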
Jack Morgenstein 11e75a7455 mlx4_core: Move table_find from fmr_alloc to fmr_enable
mlx4_table_find (for FMR MPTs) requires that ICM memory already be
mapped.  Before this fix, FMR allocation depended on ICM memory
already being mapped for the MPT entry.  If all currently mapped
entries are taken, the find operation fails (even if the MPT ICM table
still has more entries that are just not mapped yet).

This fix moves the mpt find operation to fmr_enable, to guarantee that
any required ICM memory mapping has already occurred.

Found by Oren Duer of Mellanox.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-02-14 10:43:48 -08:00
Roland Dreier b57aacfa7a mlx4_core: Clean up struct mlx4_buf
Now that struct mlx4_buf.u is a struct instead of a union because of
the vmap() changes, there's no point in having the nested struct at
all.  So move .direct and .page_list directly into struct mlx4_buf and
get rid of a bunch of unnecessary ".u"s.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-02-06 21:17:59 -08:00
Roland Dreier e8f9b2ed98 mlx4_core: Fix more section mismatches
Commit 3d73c288 ("mlx4_core: Fix section mismatches") fixed some of
the section mismatches introduced when error recovery was added, but
there were still more cases of error recovery code calling into
__devinit code from regular .text.  Fix this by getting rid of the
now-incorrect __devinit annotations.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-02-04 20:20:41 -08:00
Roland Dreier 3d73c2884f mlx4_core: Fix section mismatches
Commit ee49bd93 ("mlx4_core: Reset device when internal error is
detected") introduced some section mismatch problems when
CONFIG_HOTPLUG=n, because the error recovery code tears down and
reinitializes the device after everything is loaded, which ends up
calling into lots of code marked __devinit and __devexit from regular
.text.  Fix this by getting rid of these now-incorrect section
markers.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-10-10 15:43:54 -07:00
Jack Morgenstein 8ad11fb6b0 IB/mlx4: Implement FMRs
Implement FMRs for mlx4.  This is an adaptation of code from mthca.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-10-09 19:59:16 -07:00
Jack Morgenstein d7bb58fb1c mlx4_core: Write MTTs from CPU instead of using the WRITE_MTT FW command
Write MTT entries directly to ICM from the driver (eliminating use of
WRITE_MTT command).  This reduces the number of FW commands needed to
register an MR by at least a factor of 2 and speeds up memory
registration significantly.  This code will also be used to implement
FMRs.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-10-09 19:59:16 -07:00
Roland Dreier cf78237d7b mlx4_core: Reserve the correct number of MTT segments
Taking ilog2(dev->caps.reserved_mtts) to find out the order to pass to
the MTT buddy allocator will do the wrong thing if reserved_mtts is ever
not a power of 2.  Be safe and use fls(dev->caps.reserved_mtts - 1).

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-10-09 19:59:16 -07:00
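A quick illustration of the difference: for a count that is not a power of
two, ilog2() rounds down and under-reserves, while fls(n - 1) yields the
smallest order that still covers all n entries.

#include <linux/types.h>
#include <linux/bitops.h>
#include <linux/log2.h>

static int demo_reserved_mtt_order(u32 reserved_mtts)
{
	/* Example: reserved_mtts = 1000
	 *   ilog2(1000)   = 9   -> 2^9  = 512  (too small)
	 *   fls(1000 - 1) = 10  -> 2^10 = 1024 (covers all 1000)
	 */
	return fls(reserved_mtts - 1);
}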
Jack Morgenstein 0172e2e14c mlx4_core: Remove kfree() in mlx4_mr_alloc() error flow
mlx4_mr_alloc() doesn't actually allocate mr (it just initializes the
pointer that the caller passes in), so it shouldn't free it if an
error occurs.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-07-28 08:30:45 -07:00
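The rule the fix enforces, in sketch form (all names are placeholders): a
function that merely initializes a structure owned by its caller must not
kfree() that structure on failure.

#include <linux/types.h>

struct demo_dev { int dummy; };
struct demo_mr  { u32 key; };

/* Assumed helper that may fail. */
static int demo_reserve_mr_index(struct demo_dev *dev, struct demo_mr *mr);

static int demo_mr_alloc(struct demo_dev *dev, struct demo_mr *mr)
{
	int err = demo_reserve_mr_index(dev, mr);

	if (err)
		return err;   /* 'mr' belongs to the caller -- never kfree() it here */

	return 0;
}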
Jack Morgenstein b2d9308ae4 mlx4_core: Don't set MTT address in dMPT entries with PA set
If a dMPT entry has the PA flag (direct physical address) set, then
the (unused) MTT base address field has to be set to 0.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-06-07 23:24:38 -07:00
Roland Dreier 225c7b1fee IB/mlx4: Add a driver for Mellanox ConnectX InfiniBand adapters
Add an InfiniBand driver for Mellanox ConnectX adapters.  Because
these adapters can also be used as ethernet NICs and Fibre Channel 
HBAs, the driver is split into two modules: 
 
  mlx4_core: Handles low-level things like device initialization and 
    processing firmware commands.  Also controls resource allocation 
    so that the InfiniBand, ethernet and FC functions can share a 
    device without stepping on each other. 
 
  mlx4_ib: Handles InfiniBand-specific things; plugs into the 
    InfiniBand midlayer. 

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-05-08 18:00:38 -07:00