Commit Graph

1484 Commits

Sage Weil 8fc57da4d3 ceph: ignore trailing data in monmap
This lets us extend the format more easily.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-12 10:29:24 -07:00
Sage Weil 752727a1b2 ceph: add file layout validation
This tracks updates to code shared with userspace.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-09 16:39:30 -07:00
Sage Weil 13e38c8ae7 ceph: update to mon client protocol v15
The mon request headers now include session_mon information that must
be properly initialized.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-09 16:39:27 -07:00
Sage Weil 266673db42 ceph: cancel osd requests before resending them
This ensures we don't submit the same request twice if we are kicking a
specific osd (as with an osd_reset), or when we hit a transient error and
resend.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-09 11:58:20 -07:00
Sage Weil 81b024e70f ceph: reset osd session on fault, not peer_reset
The peer_reset just takes longer (until we reconnect and discover the osd
dropped the session... which it will).

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-09 11:58:15 -07:00
Sage Weil 991abb6ecf ceph: fail gracefully on corrupt osdmap (bad pg_temp mapping)
Return an error and report a corrupt map instead of crying BUG().

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-09 11:58:11 -07:00
Sage Weil 0ba6478df7 ceph: revoke osd request message on request completion
If an osd has failed or returned and a request has been sent twice, it's
possible to get a reply and unregister the request while the request
message is queued for delivery.  Since the message references the caller's
page vector, we need to revoke it before completing.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-09 11:58:07 -07:00
Sage Weil c1ea8823be ceph: fix osd request submission race
The osd request submission path registers the request, drops and retakes
the request_mutex, then sends it to the OSD.  A racing kick_requests could
send it during that interval, causing the same msg to be sent twice and
BUGing in the msgr.

Fix by only sending the message if it hasn't been touched by other
threads.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-09 11:58:03 -07:00
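
A minimal sketch of that fix, assuming illustrative helper and field names (__register_request(), map_request_to_osd(), send_request_msg(), r_sent) rather than the real ceph symbols: after re-taking request_mutex, the submitting thread only transmits if no other path has already marked the request as sent.

    /* Sketch: transmit the request message at most once, even if a
     * concurrent kick_requests already picked it up. */
    static void submit_request_sketch(struct ceph_osd_client *osdc,
                                      struct ceph_osd_request *req)
    {
            mutex_lock(&osdc->request_mutex);
            __register_request(osdc, req);          /* now visible to kick_requests */
            mutex_unlock(&osdc->request_mutex);

            map_request_to_osd(osdc, req);          /* may sleep; racers may run */

            mutex_lock(&osdc->request_mutex);
            if (req->r_sent == 0) {                 /* untouched by other threads */
                    req->r_sent = 1;
                    send_request_msg(osdc, req);    /* queue the msg exactly once */
            }
            mutex_unlock(&osdc->request_mutex);
    }
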
Sage Weil 0656d11ba6 ceph: renew mon subscription before it expires
Be conservative: renew subscription once half the interval has expired.

Do not reuse sub expiration to control hunting.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-08 10:39:08 -07:00
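
A rough sketch of the timing, with an illustrative struct standing in for the monitor client's subscription state: each subscribe ack schedules the next renewal at the halfway point of the granted interval, and a separate flag (not the expiration time) tracks whether a request is in flight.

    /* Sketch: renew the mon subscription once half the interval is gone. */
    struct mon_sub_sketch {
            unsigned long renew_after;      /* jiffies deadline for renewal */
            bool sent;                      /* a subscribe request is in flight */
    };

    static void handle_subscribe_ack_sketch(struct mon_sub_sketch *sub,
                                            u32 seconds_granted)
    {
            sub->renew_after = jiffies + (unsigned long)seconds_granted * HZ / 2;
            sub->sent = false;              /* let the periodic tick resend later */
    }

    static bool subscription_needs_renewal(const struct mon_sub_sketch *sub)
    {
            return !sub->sent && time_after(jiffies, sub->renew_after);
    }
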
Sage Weil e251e28808 ceph: fix mdsmap decoding when multiple mds's are present
A misplaced sizeof() around namelen was throwing things off.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-07 16:38:55 -07:00
Sage Weil b28813a61d ceph: gracefully avoid empty crush buckets
This avoids a divide by zero when the input and/or map are
malformed.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-07 10:59:34 -07:00
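
The guard amounts to checking the bucket size before using it as a modulus; a simplified sketch (real CRUSH buckets carry per-type state beyond a plain item array):

    /* Sketch: refuse to choose from an empty bucket rather than taking
     * x % 0 on a malformed map. */
    static int bucket_choose_sketch(const struct crush_bucket *bucket,
                                    int x, int r)
    {
            if (bucket->size == 0)
                    return -1;                      /* caller treats as "no item" */
            return bucket->items[(x + r) % bucket->size];
    }
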
Sage Weil b195befd9a ceph: include preferred_osd in file layout virtual xattr
Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-07 10:59:30 -07:00
Sage Weil fa0b72e9e2 ceph: show meaningful version on module load
Kill the old git revision; print the ceph version and protocol
versions instead.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-07 10:59:10 -07:00
Sage Weil e324b8f991 ceph: document shared files in README
Document files shared between kernel and user code trees.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 12:21:17 -07:00
Sage Weil 9030aaf9bf ceph: Kconfig, Makefile
Kconfig options and Makefile.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:15 -07:00
Sage Weil 76aa844d5b ceph: debugfs
Basic state information is available via /sys/kernel/debug/ceph,
including instances of the client, fsids, current monitor, mds and osd
maps, outstanding server requests, and hooks to adjust debug levels.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:14 -07:00
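
A sketch of how such entries are typically registered (the file name and fops here are placeholders, not the actual set of ceph debugfs files): a module-wide directory under /sys/kernel/debug, with read-only files exposing client state.

    /* Sketch: create /sys/kernel/debug/ceph and one read-only status file. */
    static struct dentry *debugfs_dir_sketch;

    static int debugfs_sketch_init(void *client,
                                   const struct file_operations *status_fops)
    {
            debugfs_dir_sketch = debugfs_create_dir("ceph", NULL);
            if (!debugfs_dir_sketch)
                    return -ENOMEM;
            debugfs_create_file("client_status", 0400, debugfs_dir_sketch,
                                client, status_fops);
            return 0;
    }
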
Sage Weil 8f4e91dee2 ceph: ioctls
A few Ceph ioctls for getting and setting file layout (striping)
parameters, and learning the identity and network address of the OSD a
given region of a file is stored on.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:14 -07:00
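
Such ioctls reduce to ordinary _IOR/_IOW definitions over a small argument struct; an illustrative shape (the struct fields, magic number, and command numbers below are examples, not the exact ceph ABI):

    /* Illustrative only -- not the exact ceph ioctl ABI. */
    struct ioctl_layout_example {
            __u64 stripe_unit;      /* bytes per stripe unit */
            __u64 stripe_count;     /* stripe units per object set */
            __u64 object_size;      /* bytes per object */
            __s64 preferred_osd;    /* -1 means no preferred placement */
    };

    #define EXAMPLE_IOCTL_MAGIC 0x97
    #define EXAMPLE_IOC_GET_LAYOUT _IOR(EXAMPLE_IOCTL_MAGIC, 1, \
                                        struct ioctl_layout_example)
    #define EXAMPLE_IOC_SET_LAYOUT _IOW(EXAMPLE_IOCTL_MAGIC, 2, \
                                        struct ioctl_layout_example)
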
Sage Weil a8e63b7d51 ceph: nfs re-export support
Basic NFS re-export support is included.  This mostly works.  However,
Ceph's MDS design precludes the ability to generate a (small)
filehandle that will be valid forever, so this is of limited utility.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:13 -07:00
Sage Weil 8fc91fd859 ceph: message pools
The msgpool is a basic mempool_t-like structure to preallocate
messages we expect to receive over the wire.  This ensures we have the
necessary memory preallocated to process replies to requests, or to
process unsolicited messages from various servers.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:13 -07:00
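
Conceptually the pool is a lock-protected list of preallocated messages that is refilled as messages are consumed; a simplified sketch (the struct, field, and function names are illustrative, and refill/error handling is elided):

    /* Sketch: hand out a preallocated message, or NULL so the caller can
     * fall back to a fresh allocation. */
    struct msgpool_sketch {
            spinlock_t lock;
            struct list_head msgs;          /* preallocated messages */
            int num;                        /* currently available */
    };

    static struct ceph_msg *msgpool_get_sketch(struct msgpool_sketch *pool)
    {
            struct ceph_msg *msg = NULL;

            spin_lock(&pool->lock);
            if (!list_empty(&pool->msgs)) {
                    msg = list_first_entry(&pool->msgs, struct ceph_msg,
                                           list_head);
                    list_del_init(&msg->list_head);
                    pool->num--;
            }
            spin_unlock(&pool->lock);
            return msg;
    }
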
Sage Weil 31b8006e1d ceph: messenger library
A generic message passing library is used to communicate with all
other components in the Ceph file system.  The messenger library
provides ordered, reliable delivery of messages between two nodes in
the system.

This implementation is based on TCP.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:13 -07:00
Sage Weil 963b61eb04 ceph: snapshot management
Ceph snapshots rely on client cooperation in determining which
operations apply to which snapshots, and appropriately flushing
snapshotted data and metadata back to the OSD and MDS clusters.
Because snapshots apply to subtrees of the file hierarchy and can be
created at any time, there is a fair bit of bookkeeping required to
make this work.

Portions of the hierarchy that belong to the same set of snapshots
are described by a single 'snap realm.'  A 'snap context' describes
the set of snapshots that exist for a given file or directory.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:12 -07:00
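
A sketch of the snap context's shape (simplified; the real definition is shared with the server-side code): a ref-counted, descending-ordered list of snapshot ids that gets attached to data as it is dirtied, so writeback knows which snapshots each page belongs to.

    /* Sketch: the set of snapshots that applies to a piece of dirty data. */
    struct snap_context_sketch {
            atomic_t nref;          /* shared by many dirty pages/caps */
            u64 seq;                /* highest snapshot seq represented */
            u32 num_snaps;
            u64 snaps[];            /* snap ids, newest first */
    };
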
Sage Weil a8599bd821 ceph: capability management
The Ceph metadata servers control client access to inode metadata and
file data by issuing capabilities, granting clients permission to read
and/or write both inode fields and file data to OSDs (storage nodes).
Each capability consists of a set of bits indicating which operations
are allowed.

If the client holds a *_SHARED cap, the client has a coherent value
that can be safely read from the cached inode.

In the case of *_EXCL (exclusive) or FILE_WR capabilities, the client
is allowed to change inode attributes (e.g., file size, mtime), note
its dirty state in the ceph_cap, and asynchronously flush that
metadata change to the MDS.

In the event of a conflicting operation (perhaps by another client),
the MDS will revoke the conflicting client capabilities.

In order for a client to cache an inode, it must hold a capability
with at least one MDS server.  When inodes are released, release
notifications are batched and periodically sent en masse to the MDS
cluster to release server state.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:12 -07:00
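
Each capability is ultimately a bitmask of permitted operations; a sketch of the kind of check a holder performs before, say, buffering a write (the constant names echo the ceph naming, but the bit values here are illustrative):

    /* Illustrative cap bits: one bit per permitted operation. */
    #define EX_CAP_FILE_SHARED  (1 << 0)    /* coherent cached metadata */
    #define EX_CAP_FILE_EXCL    (1 << 1)    /* may dirty metadata locally */
    #define EX_CAP_FILE_RD      (1 << 2)    /* may read file data */
    #define EX_CAP_FILE_WR      (1 << 3)    /* may write file data */
    #define EX_CAP_FILE_BUFFER  (1 << 4)    /* may buffer writes in page cache */

    static bool can_buffer_write_sketch(unsigned int issued)
    {
            unsigned int want = EX_CAP_FILE_WR | EX_CAP_FILE_BUFFER;

            return (issued & want) == want;     /* else fall back to sync write */
    }
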
Sage Weil ba75bb98cf ceph: monitor client
The monitor cluster is responsible for managing cluster membership
and state.  The monitor client handles what minimal interaction
the Ceph client has with it: checking for updated versions of the
MDS and OSD maps, getting statfs() information, and unmounting.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:11 -07:00
Sage Weil 5ecc0a0f81 ceph: CRUSH mapping algorithm
CRUSH is a pseudorandom data distribution function designed to map
inputs onto a dynamic hierarchy of devices, while minimizing the
extent to which inputs are remapped when the devices are added or
removed.  It includes some features that are specifically useful for
storage, most notably the ability to map each input onto a set of N
devices that are separated across administrator-defined failure
domains.  CRUSH is used to distribute data across the cluster of Ceph
storage nodes.

More information about CRUSH can be found in this paper:

    http://www.ssrc.ucsc.edu/Papers/weil-sc06.pdf

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:11 -07:00
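
A deliberately toy sketch of the core idea, ignoring bucket types, weights, and the real rjenkins hash: hash (input, replica) to pick a device, and reroll the replica number whenever the choice collides with one already picked (or, in the real algorithm, lands on a failed device or violates a failure-domain rule).

    /* Toy sketch: map input x onto n distinct devices out of num_devs. */
    static void toy_crush_select(unsigned int x, int n, int num_devs, int *out)
    {
            int placed = 0;
            unsigned int r = 0;

            while (placed < n) {
                    unsigned int h = (x * 2654435761u) ^ (r * 40503u);
                    int dev = (int)(h % (unsigned int)num_devs);
                    int i, dup = 0;

                    for (i = 0; i < placed; i++)
                            if (out[i] == dev)
                                    dup = 1;
                    if (!dup)
                            out[placed++] = dev;
                    r++;                    /* reroll on a duplicate choice */
            }
    }
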
Sage Weil f24e9980eb ceph: OSD client
The OSD client is responsible for reading and writing data from/to the
object storage pool.  This includes determining where objects are
stored in the cluster, and ensuring that requests are retried or
redirected in the event of a node failure or data migration.

If an OSD does not respond before a timeout expires, keepalive
messages are sent across the lossless, ordered communications channel
to ensure that any break in the TCP is discovered.  If the session
does reset, a reconnection is attempted and affected requests are
resent (by the message transport layer).

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:10 -07:00
Sage Weil 2f2dc05340 ceph: MDS client
The MDS (metadata server) client is responsible for submitting
requests to the MDS cluster and parsing the response.  We decide which
MDS to submit each request to based on cached information about the
current partition of the directory hierarchy across the cluster.  A
stateful session is opened with each MDS before we submit requests to
it, and a mutex is used to control the ordering of messages within
each session.

An MDS request may generate two responses.  The first indicates the
operation was a success and returns any result.  A second reply is
sent when the operation commits to disk.  Note that locking on the MDS
ensures that the results of updates are visible only to the updating
client before the operation commits.  Requests are linked to the
containing directory so that an fsync will wait for them to commit.

If an MDS fails and/or recovers, we resubmit requests as needed.  We
also reconnect existing capabilities to a recovering MDS to
reestablish that shared session state.  Old dentry leases are
invalidated.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:09 -07:00
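
A sketch of the two-reply bookkeeping (struct and list names are illustrative): the caller is unblocked by the first reply, while the request stays on its directory's "unsafe" list until the commit reply arrives, which is what lets an fsync on the directory wait for it.

    /* Sketch: handle the second (commit) reply for a request. */
    struct mds_request_sketch {
            struct list_head unsafe_item;           /* on the parent dir's list */
            struct completion safe_completion;      /* fsync waits on this */
    };

    static void handle_reply_sketch(struct mds_request_sketch *req, bool is_safe)
    {
            if (!is_safe)
                    return;                         /* caller already got its result */
            list_del_init(&req->unsafe_item);       /* no longer pending commit */
            complete(&req->safe_completion);        /* wake any waiting fsync */
    }
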
Sage Weil 1d3576fd10 ceph: address space operations
The ceph address space methods are concerned primarily with managing
the dirty page accounting in the inode, which (among other things)
must keep track of which snapshot context each page was dirtied in,
and ensure that dirty data is written out to the OSDs in snapshot
order.

A writepage() on a page that is not currently writeable due to
snapshot writeback ordering constraints is ignored (it was presumably
called from kswapd).

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:09 -07:00
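
A sketch of that gate (the helpers get_oldest_dirty_snapc() and write_page_to_osd() are illustrative): the snap context recorded when the page was dirtied must be the oldest one still holding dirty data; otherwise the page is left dirty for a later pass.

    /* Sketch: only write this page if its snap context is next in line. */
    static int writepage_sketch(struct page *page, struct writeback_control *wbc)
    {
            struct inode *inode = page->mapping->host;
            void *page_snapc = (void *)page_private(page);    /* set at dirty time */
            void *oldest = get_oldest_dirty_snapc(inode);      /* illustrative */

            if (page_snapc != oldest) {
                    /* presumably called from kswapd: back off, stay dirty */
                    redirty_page_for_writepage(wbc, page);
                    unlock_page(page);
                    return 0;
            }
            return write_page_to_osd(inode, page);             /* illustrative */
    }
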
Sage Weil 124e68e740 ceph: file operations
File open and close operations, and read and write methods that ensure
we have obtained the proper capabilities from the MDS cluster before
performing IO on a file.  We take references on held capabilities for
the duration of the read/write to avoid prematurely releasing them
back to the MDS.

We implement two main paths for read and write: one that is buffered
(and uses generic_aio_{read,write}), and one that is fully synchronous
and blocking (operating either on a __user pointer or, if O_DIRECT,
directly on user pages).

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:08 -07:00
Sage Weil 2817b000b0 ceph: directory operations
Directory operations, including lookup, are defined here.  We take
advantage of lookup intents when possible.  For the most part, we just
need to build the proper requests for the metadata server(s) and
pass things off to the mds_client.

The results of most operations are normally incorporated into the
client's cache when the reply is parsed by ceph_fill_trace().
However, if the MDS replies without a trace (e.g., when retrying an
update after an MDS failure recovery), some operation-specific cleanup
may be needed.

We can validate cached dentries in two ways.  A per-dentry lease may
be issued by the MDS, or a per-directory cap may be issued that acts
as a lease on the entire directory.  In the latter case, a 'gen' value
is used to determine which dentries belong to the currently leased
directory contents.

We normally prepopulate the dcache and icache with readdir results.
This makes subsequent lookups and getattrs avoid any server
interaction.  It also lets us satisfy readdir operations by peeking at
the dcache IFF we hold the per-directory cap/lease, previously
performed a readdir, and haven't dropped any of the resulting
dentries.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:08 -07:00
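
The 'gen' check reduces to comparing a counter stamped on the dentry when it was populated against the directory's current counter, which advances whenever the per-directory lease is lost; a sketch with illustrative parameters:

    /* Sketch: a cached dentry is trustworthy only while the directory
     * lease that produced it is still the current one. */
    static bool dentry_covered_by_dir_lease(bool dir_lease_held,
                                            u32 dentry_gen, u32 dir_gen)
    {
            return dir_lease_held && dentry_gen == dir_gen;
    }
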
Sage Weil 355da1eb7a ceph: inode operations
Inode cache and inode operations.  We also include routines to
incorporate metadata structures returned by the MDS into the client
cache, and some helpers to deal with file capabilities and metadata
leases.  The bulk of that work is done by fill_inode() and
fill_trace().

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:08 -07:00
Sage Weil 16725b9d2a ceph: super.c
Mount option parsing, client setup and teardown, and a few odds and
ends (e.g., statfs).

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:07 -07:00
Sage Weil c30dbb9cc7 ceph: ref counted buffer
struct ceph_buffer is a simple ref-counted buffer.  We transparently
choose between kmalloc for small buffers and vmalloc for large ones.

This is currently used only for allocating memory for xattr data.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:07 -07:00
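
A sketch of such a buffer, assuming the usual <linux/slab.h>, <linux/vmalloc.h>, and <linux/kref.h> helpers (the struct and function names are illustrative): small requests come from kmalloc, large ones fall back to vmalloc, and the last reference frees with the matching call.

    /* Sketch of a ref-counted buffer that picks kmalloc vs. vmalloc. */
    struct buffer_sketch {
            struct kref kref;       /* drop with kref_put(..., buffer_sketch_release) */
            void *vec_base;
            size_t len;
            bool is_vmalloc;
    };

    static struct buffer_sketch *buffer_sketch_new(size_t len, gfp_t gfp)
    {
            struct buffer_sketch *b = kmalloc(sizeof(*b), gfp);

            if (!b)
                    return NULL;
            b->vec_base = kmalloc(len, gfp | __GFP_NOWARN);
            b->is_vmalloc = false;
            if (!b->vec_base) {
                    b->vec_base = vmalloc(len);     /* large buffers */
                    b->is_vmalloc = true;
            }
            if (!b->vec_base) {
                    kfree(b);
                    return NULL;
            }
            b->len = len;
            kref_init(&b->kref);
            return b;
    }

    static void buffer_sketch_release(struct kref *kref)
    {
            struct buffer_sketch *b = container_of(kref, struct buffer_sketch, kref);

            if (b->is_vmalloc)
                    vfree(b->vec_base);
            else
                    kfree(b->vec_base);
            kfree(b);
    }
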
Sage Weil de57606c23 ceph: client types
We first define constants, types, and prototypes for the kernel client
proper.

A few subsystems are defined separately later: the MDS, OSD, and
monitor clients, and the messaging layer.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:07 -07:00
Sage Weil 0dee3c28af ceph: on-wire types
These headers describe the types used to exchange messages between the
Ceph client and various servers.  All types are little-endian and
packed.  These headers are shared between the kernel and userspace, so
all types are in terms of e.g. __u32.

Additionally, we define a few magic values to identify the current
version of the protocol(s) in use, so that discrepancies can be
detected on mount.

Signed-off-by: Sage Weil <sage@newdream.net>
2009-10-06 11:31:06 -07:00
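
The pattern those headers follow, in illustrative form (the struct, fields, and version value below are examples rather than the actual wire format): explicit-width little-endian fields in packed structs, converted with le*_to_cpu() on decode, with version constants checked at mount time.

    /* Illustrative on-wire struct: explicit widths, little-endian, packed. */
    #define EXAMPLE_PROTO_VERSION 15        /* bumped whenever the format changes */

    struct example_wire_header {
            __le64 tid;             /* transaction id */
            __le32 type;            /* message type */
            __le16 version;         /* sender's protocol version */
            __le16 front_len;       /* payload length that follows */
    } __attribute__ ((packed));

    static inline int example_check_version(const struct example_wire_header *h)
    {
            /* mismatches are detected at mount rather than mid-protocol */
            return le16_to_cpu(h->version) == EXAMPLE_PROTO_VERSION ? 0 : -EINVAL;
    }
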