linux/fs/xfs/xfs_attr_remote.c

/*
 * Copyright (c) 2000-2005 Silicon Graphics, Inc.
 * Copyright (c) 2013 Red Hat, Inc.
 * All Rights Reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it would be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
 */
#include "xfs.h"
#include "xfs_fs.h"
#include "xfs_log_format.h"
#include "xfs_trans_resv.h"
#include "xfs_bit.h"
#include "xfs_sb.h"
#include "xfs_ag.h"
#include "xfs_mount.h"
#include "xfs_da_format.h"
#include "xfs_error.h"
#include "xfs_da_btree.h"
#include "xfs_bmap_btree.h"
#include "xfs_dinode.h"
#include "xfs_inode.h"
#include "xfs_alloc.h"
#include "xfs_trans.h"
#include "xfs_inode_item.h"
#include "xfs_bmap.h"
#include "xfs_bmap_util.h"
#include "xfs_attr.h"
#include "xfs_attr_leaf.h"
#include "xfs_attr_remote.h"
#include "xfs_trans_space.h"
#include "xfs_trace.h"
#include "xfs_cksum.h"
#include "xfs_buf_item.h"

#define ATTR_RMTVALUE_MAPSIZE	1	/* # of map entries at once */

/*
 * Each contiguous block has a header, so it is not just a simple attribute
 * length to FSB conversion.
 */
int
xfs_attr3_rmt_blocks(
	struct xfs_mount	*mp,
	int			attrlen)
{
	if (xfs_sb_version_hascrc(&mp->m_sb)) {
		int buflen = XFS_ATTR3_RMT_BUF_SPACE(mp, mp->m_sb.sb_blocksize);
		return (attrlen + buflen - 1) / buflen;
	}
	return XFS_B_TO_FSB(mp, attrlen);
}
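
/*
 * For example: with CRCs enabled, each remote attribute block holds
 * XFS_ATTR3_RMT_BUF_SPACE(mp, sb_blocksize) bytes of value data, i.e. the
 * block size less the 56 byte per-block header.  On a 4k block filesystem
 * a 64k value therefore needs DIV_ROUND_UP(65536, 4096 - 56) = 17 blocks,
 * one more than the 16 blocks XFS_B_TO_FSB() gives on a non-CRC filesystem.
 */
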
/*
 * Checking of the remote attribute header is split into two parts. The verifier
 * does CRC, location and bounds checking, the unpacking function checks the
 * attribute parameters and owner.
 */
static bool
xfs_attr3_rmt_hdr_ok(
	struct xfs_mount	*mp,
	void			*ptr,
	xfs_ino_t		ino,
	uint32_t		offset,
	uint32_t		size,
	xfs_daddr_t		bno)
{
	struct xfs_attr3_rmt_hdr *rmt = ptr;

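	/*
	 * The header must record the owner inode, the byte range of the
	 * value that this block covers, and the daddr it was written to;
	 * any mismatch means the block belongs to a different attribute or
	 * a different location on disk.
	 */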
	if (bno != be64_to_cpu(rmt->rm_blkno))
		return false;
	if (offset != be32_to_cpu(rmt->rm_offset))
		return false;
	if (size != be32_to_cpu(rmt->rm_bytes))
		return false;
	if (ino != be64_to_cpu(rmt->rm_owner))
		return false;

	/* ok */
	return true;
}

static bool
xfs_attr3_rmt_verify(
	struct xfs_mount	*mp,
	void			*ptr,
	int			fsbsize,
	xfs_daddr_t		bno)
{
	struct xfs_attr3_rmt_hdr *rmt = ptr;

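	/*
	 * Structural checks only: magic, filesystem uuid, block address and
	 * size/offset bounds.  Matching the owner, offset and length against
	 * a specific attribute is left to xfs_attr3_rmt_hdr_ok().
	 */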
	if (!xfs_sb_version_hascrc(&mp->m_sb))
		return false;
	if (rmt->rm_magic != cpu_to_be32(XFS_ATTR3_RMT_MAGIC))
		return false;
	if (!uuid_equal(&rmt->rm_uuid, &mp->m_sb.sb_uuid))
		return false;
	if (be64_to_cpu(rmt->rm_blkno) != bno)
		return false;
	if (be32_to_cpu(rmt->rm_bytes) > fsbsize - sizeof(*rmt))
		return false;
	if (be32_to_cpu(rmt->rm_offset) +
	    be32_to_cpu(rmt->rm_bytes) >= XATTR_SIZE_MAX)
		return false;
	if (rmt->rm_owner == 0)
		return false;

	return true;
}

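/*
 * Read verification runs when the buffer I/O completes: every filesystem
 * block covered by the buffer is checked before its contents are used, and
 * any CRC or header failure marks the buffer EFSCORRUPTED so the caller
 * sees an error rather than bad attribute data.
 */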
static void
xfs_attr3_rmt_read_verify(
	struct xfs_buf		*bp)
{
	struct xfs_mount *mp = bp->b_target->bt_mount;
	char		*ptr;
	int		len;
	bool		corrupt = false;
	xfs_daddr_t	bno;

	/* no verification of non-crc buffers */
	if (!xfs_sb_version_hascrc(&mp->m_sb))
		return;

	ptr = bp->b_addr;
	bno = bp->b_bn;
	len = BBTOB(bp->b_length);
	ASSERT(len >= XFS_LBSIZE(mp));

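	/*
	 * The buffer may map several filesystem blocks, each carrying its
	 * own header and CRC.  Walk them one XFS_LBSIZE(mp) chunk at a time
	 * and stop at the first block that fails either check.
	 */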
	while (len > 0) {
		if (!xfs_verify_cksum(ptr, XFS_LBSIZE(mp),
				      XFS_ATTR3_RMT_CRC_OFF)) {
			corrupt = true;
			break;
		}
		if (!xfs_attr3_rmt_verify(mp, ptr, XFS_LBSIZE(mp), bno)) {
			corrupt = true;
			break;
		}
		len -= XFS_LBSIZE(mp);
		ptr += XFS_LBSIZE(mp);
		bno += mp->m_bsize;
	}

	if (corrupt) {
XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp, bp->b_addr);
xfs_buf_ioerror(bp, EFSCORRUPTED);
} else
ASSERT(len == 0);
}
static void
xfs_attr3_rmt_write_verify(
struct xfs_buf *bp)
{
struct xfs_mount *mp = bp->b_target->bt_mount;
struct xfs_buf_log_item *bip = bp->b_fspriv;
char *ptr;
int len;
xfs_daddr_t bno;
/* no verification of non-crc buffers */
if (!xfs_sb_version_hascrc(&mp->m_sb))
return;
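/*
 * As on read, walk the buffer one filesystem block at a time. For each
 * block, verify the header, stamp in the LSN from the buffer's log item
 * if the buffer is being logged, then recalculate the CRC before the
 * block goes to disk.
 */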
ptr = bp->b_addr;
bno = bp->b_bn;
len = BBTOB(bp->b_length);
ASSERT(len >= XFS_LBSIZE(mp));
while (len > 0) {
if (!xfs_attr3_rmt_verify(mp, ptr, XFS_LBSIZE(mp), bno)) {
XFS_CORRUPTION_ERROR(__func__,
XFS_ERRLEVEL_LOW, mp, bp->b_addr);
xfs_buf_ioerror(bp, EFSCORRUPTED);
return;
}
if (bip) {
struct xfs_attr3_rmt_hdr *rmt;
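/*
 * Record the LSN from the buffer's log item in the block header so
 * recovery can tell how recent the on-disk contents are.
 */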
rmt = (struct xfs_attr3_rmt_hdr *)ptr;
rmt->rm_lsn = cpu_to_be64(bip->bli_item.li_lsn);
}
xfs_update_cksum(ptr, XFS_LBSIZE(mp), XFS_ATTR3_RMT_CRC_OFF);
len -= XFS_LBSIZE(mp);
ptr += XFS_LBSIZE(mp);
bno += mp->m_bsize;
}
ASSERT(len == 0);
}
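/*
 * Buffer operations for remote attribute blocks: verify CRCs and headers
 * on read, recompute them on write.
 */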
const struct xfs_buf_ops xfs_attr3_rmt_buf_ops = {
.verify_read = xfs_attr3_rmt_read_verify,
.verify_write = xfs_attr3_rmt_write_verify,
};
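/*
 * Initialise the header of a remote attribute block. Returns the size of
 * the header so the caller knows how much of the block is taken up by
 * metadata rather than attribute data; returns zero on non-CRC
 * filesystems, which carry no remote attribute headers.
 */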
STATIC int
xfs_attr3_rmt_hdr_set(
struct xfs_mount *mp,
void *ptr,
xfs_ino_t ino,
uint32_t offset,
uint32_t size,
xfs_daddr_t bno)
{
struct xfs_attr3_rmt_hdr *rmt = ptr;
if (!xfs_sb_version_hascrc(&mp->m_sb))
return 0;
rmt->rm_magic = cpu_to_be32(XFS_ATTR3_RMT_MAGIC);
rmt->rm_offset = cpu_to_be32(offset);
rmt->rm_bytes = cpu_to_be32(size);
uuid_copy(&rmt->rm_uuid, &mp->m_sb.sb_uuid);
rmt->rm_owner = cpu_to_be64(ino);
	rmt->rm_blkno = cpu_to_be64(bno);

	return sizeof(struct xfs_attr3_rmt_hdr);
}

/*
 * Helper functions to copy attribute data in and out of the on-disk remote
 * attribute extents, one mapped buffer at a time.
*/
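/*
 * Copy the value data out of one mapped buffer.  On CRC-enabled filesystems
 * every filesystem block in the buffer starts with a remote attribute header
 * that is verified and then skipped, so each block contributes at most
 * XFS_ATTR3_RMT_BUF_SPACE() bytes of value data.
 */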
STATIC int
xfs_attr_rmtval_copyout(
	struct xfs_mount *mp,
	struct xfs_buf	*bp,
	xfs_ino_t	ino,
	int		*offset,
	int		*valuelen,
	__uint8_t	**dst)
{
	char		*src = bp->b_addr;
	xfs_daddr_t	bno = bp->b_bn;
	int		len = BBTOB(bp->b_length);

	ASSERT(len >= XFS_LBSIZE(mp));

	while (len > 0 && *valuelen > 0) {
		int hdr_size = 0;
		int byte_cnt = XFS_ATTR3_RMT_BUF_SPACE(mp, XFS_LBSIZE(mp));

		byte_cnt = min(*valuelen, byte_cnt);

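		/*
		 * On CRC-enabled filesystems each block carries its own
		 * remote attribute header; verify that it matches this
		 * owner/offset/length before skipping over it.
		 */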
		if (xfs_sb_version_hascrc(&mp->m_sb)) {
			if (!xfs_attr3_rmt_hdr_ok(mp, src, ino, *offset,
						  byte_cnt, bno)) {
				xfs_alert(mp,
"remote attribute header mismatch bno/off/len/owner (0x%llx/0x%x/0x%x/0x%llx)",
					bno, *offset, byte_cnt, ino);
				return EFSCORRUPTED;
			}
			hdr_size = sizeof(struct xfs_attr3_rmt_hdr);
		}

		memcpy(*dst, src + hdr_size, byte_cnt);

		/* roll buffer forwards */
		len -= XFS_LBSIZE(mp);
		src += XFS_LBSIZE(mp);
		bno += mp->m_bsize;

		/* roll attribute data forwards */
		*valuelen -= byte_cnt;
		*dst += byte_cnt;
		*offset += byte_cnt;
	}
	return 0;
}

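/*
 * Copy the value data into one mapped buffer.  On CRC-enabled filesystems a
 * remote attribute header is placed at the start of each filesystem block,
 * and any space left over in the final block is zeroed.
 */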
STATIC void
xfs_attr_rmtval_copyin(
	struct xfs_mount *mp,
	struct xfs_buf	*bp,
	xfs_ino_t	ino,
	int		*offset,
	int		*valuelen,
	__uint8_t	**src)
{
	char		*dst = bp->b_addr;
	xfs_daddr_t	bno = bp->b_bn;
	int		len = BBTOB(bp->b_length);

	ASSERT(len >= XFS_LBSIZE(mp));

	while (len > 0 && *valuelen > 0) {
		int hdr_size;
		int byte_cnt = XFS_ATTR3_RMT_BUF_SPACE(mp, XFS_LBSIZE(mp));

		byte_cnt = min(*valuelen, byte_cnt);
		hdr_size = xfs_attr3_rmt_hdr_set(mp, dst, ino, *offset,
						 byte_cnt, bno);

		memcpy(dst + hdr_size, *src, byte_cnt);

		/*
		 * If this is the last block, zero the remainder of it.
		 * Check that we are actually the last block, too.
		 */
		if (byte_cnt + hdr_size < XFS_LBSIZE(mp)) {
			ASSERT(*valuelen - byte_cnt == 0);
			ASSERT(len == XFS_LBSIZE(mp));
			memset(dst + hdr_size + byte_cnt, 0,
			       XFS_LBSIZE(mp) - hdr_size - byte_cnt);
		}

		/* roll buffer forwards */
		len -= XFS_LBSIZE(mp);
		dst += XFS_LBSIZE(mp);
		bno += mp->m_bsize;

		/* roll attribute data forwards */
		*valuelen -= byte_cnt;
		*src += byte_cnt;
		*offset += byte_cnt;
	}
}
/*
* Read the value associated with an attribute from the out-of-line buffer
* that we stored it in.
*/
int
xfs_attr_rmtval_get(
	struct xfs_da_args	*args)
{
	struct xfs_bmbt_irec	map[ATTR_RMTVALUE_MAPSIZE];
	struct xfs_mount	*mp = args->dp->i_mount;
	struct xfs_buf		*bp;
	xfs_dablk_t		lblkno = args->rmtblkno;
	__uint8_t		*dst = args->value;
	int			valuelen = args->valuelen;
	int			nmap;
	int			error;
	int			blkcnt = args->rmtblkcnt;
	int			i;
	int			offset = 0;

	trace_xfs_attr_rmtval_get(args);

	ASSERT(!(args->flags & ATTR_KERNOVAL));

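	/*
	 * Walk the remote value extents a few mappings at a time and copy
	 * the attribute data out of the buffers that back each mapping.
	 */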
	while (valuelen > 0) {
		nmap = ATTR_RMTVALUE_MAPSIZE;
		error = xfs_bmapi_read(args->dp, (xfs_fileoff_t)lblkno,
				       blkcnt, map, &nmap,
				       XFS_BMAPI_ATTRFORK);
		if (error)
return error;
ASSERT(nmap >= 1);
for (i = 0; (i < nmap) && (valuelen > 0); i++) {
xfs_daddr_t dblkno;
int dblkcnt;
ASSERT((map[i].br_startblock != DELAYSTARTBLOCK) &&
(map[i].br_startblock != HOLESTARTBLOCK));
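/*
 * Convert the extent's start block to a disk address and its length to
 * basic blocks so the buffer can be read with the remote attribute
 * verifier attached.
 */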
dblkno = XFS_FSB_TO_DADDR(mp, map[i].br_startblock);
dblkcnt = XFS_FSB_TO_BB(mp, map[i].br_blockcount);
error = xfs_trans_read_buf(mp, NULL, mp->m_ddev_targp,
dblkno, dblkcnt, 0, &bp,
&xfs_attr3_rmt_buf_ops);
if (error)
return error;
error = xfs_attr_rmtval_copyout(mp, bp, args->dp->i_ino,
&offset, &valuelen,
&dst);
xfs_buf_relse(bp);
if (error)
return error;
/* roll attribute extent map forwards */
lblkno += map[i].br_blockcount;
blkcnt -= map[i].br_blockcount;
}
}
ASSERT(valuelen == 0);
return 0;
}
/*
* Write the value associated with an attribute into the out-of-line buffer
* that we have defined for it.
*/
int
xfs_attr_rmtval_set(
struct xfs_da_args *args)
{
struct xfs_inode *dp = args->dp;
struct xfs_mount *mp = dp->i_mount;
struct xfs_bmbt_irec map;
xfs_dablk_t lblkno;
xfs_fileoff_t lfileoff = 0;
__uint8_t *src = args->value;
int blkcnt;
int valuelen;
int nmap;
int error;
int offset = 0;
trace_xfs_attr_rmtval_set(args);
/*
* Find a "hole" in the attribute address space large enough for
* us to drop the new attribute's value into. Because CRC enabled
* attributes have headers, we can't just do a straight byte to FSB
* conversion and have to take the header space into account.
*/
blkcnt = xfs_attr3_rmt_blocks(mp, args->valuelen);
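/*
 * xfs_attr3_rmt_blocks() accounts for the per-block remote attribute
 * headers, so on a CRC enabled filesystem blkcnt can be larger than a
 * plain byte-to-FSB conversion of the value length.
 */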
error = xfs_bmap_first_unused(args->trans, args->dp, blkcnt, &lfileoff,
XFS_ATTR_FORK);
if (error)
return error;
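/*
 * Remember where the remote value will live; the rest of the attr set
 * path uses rmtblkno/rmtblkcnt to point the attribute entry at these
 * blocks.
 */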
args->rmtblkno = lblkno = (xfs_dablk_t)lfileoff;
args->rmtblkcnt = blkcnt;
/*
* Roll through the "value", allocating blocks on disk as required.
*/
while (blkcnt > 0) {
int committed;
/*
* Allocate a single extent, up to the size of the value.
*/
xfs_bmap_init(args->flist, args->firstblock);
nmap = 1;
error = xfs_bmapi_write(args->trans, dp, (xfs_fileoff_t)lblkno,
blkcnt,
XFS_BMAPI_ATTRFORK | XFS_BMAPI_METADATA,
args->firstblock, args->total, &map, &nmap,
args->flist);
if (!error) {
error = xfs_bmap_finish(&args->trans, args->flist,
&committed);
}
if (error) {
ASSERT(committed);
args->trans = NULL;
xfs_bmap_cancel(args->flist);
			return error;
}
/*
* bmap_finish() may have committed the last trans and started
* a new one. We need the inode to be in all transactions.
*/
if (committed)
xfs_trans_ijoin(args->trans, dp, 0);
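		/*
		 * The allocation must have returned a single, real extent;
		 * a delayed or hole mapping here means the allocation above
		 * did not do what we expected.
		 */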
ASSERT(nmap == 1);
ASSERT((map.br_startblock != DELAYSTARTBLOCK) &&
(map.br_startblock != HOLESTARTBLOCK));
lblkno += map.br_blockcount;
blkcnt -= map.br_blockcount;
/*
* Start the next trans in the chain.
*/
error = xfs_trans_roll(&args->trans, dp);
if (error)
			return error;
}
/*
* Roll through the "value", copying the attribute value to the
* already-allocated blocks. Blocks are written synchronously
* so that we can know they are all on disk before we turn off
* the INCOMPLETE flag.
*/
lblkno = args->rmtblkno;
blkcnt = args->rmtblkcnt;
valuelen = args->valuelen;
while (valuelen > 0) {
struct xfs_buf *bp;
xfs_daddr_t dblkno;
int dblkcnt;
ASSERT(blkcnt > 0);
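		/*
		 * Look up where the loop above allocated this part of the
		 * value so we know which disk blocks to write to.
		 */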
xfs_bmap_init(args->flist, args->firstblock);
nmap = 1;
error = xfs_bmapi_read(dp, (xfs_fileoff_t)lblkno,
blkcnt, &map, &nmap,
XFS_BMAPI_ATTRFORK);
if (error)
			return error;
ASSERT(nmap == 1);
ASSERT((map.br_startblock != DELAYSTARTBLOCK) &&
(map.br_startblock != HOLESTARTBLOCK));
		dblkno = XFS_FSB_TO_DADDR(mp, map.br_startblock);
dblkcnt = XFS_FSB_TO_BB(mp, map.br_blockcount);
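		/*
		 * Grab the buffer without reading it from disk; its contents
		 * are initialized below before it is written out.
		 */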
bp = xfs_buf_get(mp->m_ddev_targp, dblkno, dblkcnt, 0);
if (!bp)
return ENOMEM;
bp->b_ops = &xfs_attr3_rmt_buf_ops;
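		/*
		 * Copy this chunk of the value into the buffer. On CRC
		 * filesystems this also formats the per-block remote
		 * attribute headers; the verifier attached above takes care
		 * of the CRCs when the buffer is written.
		 */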
xfs_attr_rmtval_copyin(mp, bp, args->dp->i_ino, &offset,
&valuelen, &src);
error = xfs_bwrite(bp); /* GROT: NOTE: synchronous write */
xfs_buf_relse(bp);
if (error)
return error;
/* roll attribute extent map forwards */
lblkno += map.br_blockcount;
blkcnt -= map.br_blockcount;
}
ASSERT(valuelen == 0);
return 0;
}
/*
* Remove the value associated with an attribute by deleting the
* out-of-line buffer that it is stored on.
*/
int
xfs_attr_rmtval_remove(
struct xfs_da_args *args)
{
struct xfs_mount *mp = args->dp->i_mount;
xfs_dablk_t lblkno;
int blkcnt;
int error;
int done;
trace_xfs_attr_rmtval_remove(args);
/*
* Roll through the "value", invalidating the attribute value's blocks.
*/
lblkno = args->rmtblkno;
blkcnt = args->rmtblkcnt;
while (blkcnt > 0) {
struct xfs_bmbt_irec map;
struct xfs_buf *bp;
xfs_daddr_t dblkno;
int dblkcnt;
int nmap;
/*
* Try to remember where we decided to put the value.
*/
nmap = 1;
error = xfs_bmapi_read(args->dp, (xfs_fileoff_t)lblkno,
blkcnt, &map, &nmap, XFS_BMAPI_ATTRFORK);
if (error)
			return error;
ASSERT(nmap == 1);
ASSERT((map.br_startblock != DELAYSTARTBLOCK) &&
(map.br_startblock != HOLESTARTBLOCK));
		dblkno = XFS_FSB_TO_DADDR(mp, map.br_startblock);
dblkcnt = XFS_FSB_TO_BB(mp, map.br_blockcount);
/*
* If the "remote" value is in the cache, remove it.
*/
bp = xfs_incore(mp->m_ddev_targp, dblkno, dblkcnt, XBF_TRYLOCK);
if (bp) {
xfs_buf_stale(bp);
xfs_buf_relse(bp);
bp = NULL;
}
lblkno += map.br_blockcount;
blkcnt -= map.br_blockcount;
}
/*
* Keep de-allocating extents until the remote-value region is gone.
*/
lblkno = args->rmtblkno;
blkcnt = args->rmtblkcnt;
done = 0;
while (!done) {
int committed;
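		/*
		 * Unmap a chunk of the remote value's blocks; xfs_bunmapi()
		 * sets "done" once the entire range has been freed.
		 */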
xfs_bmap_init(args->flist, args->firstblock);
error = xfs_bunmapi(args->trans, args->dp, lblkno, blkcnt,
XFS_BMAPI_ATTRFORK | XFS_BMAPI_METADATA,
1, args->firstblock, args->flist,
&done);
if (!error) {
error = xfs_bmap_finish(&args->trans, args->flist,
&committed);
}
if (error) {
ASSERT(committed);
args->trans = NULL;
xfs_bmap_cancel(args->flist);
return error;
}
/*
* bmap_finish() may have committed the last trans and started
* a new one. We need the inode to be in all transactions.
*/
if (committed)
xfs_trans_ijoin(args->trans, args->dp, 0);
/*
* Close out trans and start the next one in the chain.
*/
error = xfs_trans_roll(&args->trans, args->dp);
if (error)
			return error;
}
	return 0;
}