There is a risk that the fastmap anchor PEB alternates between just
two PEBs: the current anchor and the previous anchor that was just
deleted. As the fastmap pools get the first take on free PEBs, the
pools may leave no free PEBs to be selected as the new anchor,
resulting in the two PEBs alternating. If the anchor PEBs get a high
erase count they will not be used by the pools but remain in
ubi->free, further increasing the likelihood that they will be used as
anchors.
Getting stuck using only a couple of PEBs continuously will result in
uneven wear, eventually leading to failure.
To fix this:
- Choose the fastmap anchor when the most free PEBs are available. This is
during rebuilding of the fastmap pools, after the unused pool PEBs are
added to ubi->free but before the pools are populated again from the
free PEBs. Also reserve an additional second-best PEB as a candidate
for the next time the fastmap anchor is updated. If a better PEB is
found the next time the fastmap anchor is updated, the candidate is
made available for building the pools (see the sketch after this list).
- Enable anchor move within the anchor area again as it is useful for
distributing wear.
- The anchor candidate for the next fastmap update is the most suited free
PEB. Check this PEB's erase count during wear leveling. If the wear
leveling limit is exceeded, the PEB is considered unsuitable for now. As
all other unused anchor area PEBs should be even worse, free up the
used anchor area PEB with the lowest erase count.
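A minimal sketch of the selection step, assuming hypothetical
fm_anchor/fm_next_anchor fields and helper name; removal of the chosen
entries from the free tree is omitted:

/*
 * Illustrative sketch only, not the exact upstream change: while the
 * pools are being rebuilt, walk ubi->free and pick the two least worn
 * PEBs inside the anchor area. The best one becomes the anchor, the
 * second best is kept as the candidate for the next fastmap update.
 */
static void select_fastmap_anchors(struct ubi_device *ubi)
{
	struct ubi_wl_entry *e, *best = NULL, *second = NULL;
	struct rb_node *node;

	for (node = rb_first(&ubi->free); node; node = rb_next(node)) {
		e = rb_entry(node, struct ubi_wl_entry, u.rb);

		/* Only PEBs inside the anchor area qualify. */
		if (e->pnum >= UBI_FM_MAX_START)
			continue;

		if (!best || e->ec < best->ec) {
			second = best;
			best = e;
		} else if (!second || e->ec < second->ec) {
			second = e;
		}
	}

	ubi->fm_anchor = best;		/* used for the upcoming fastmap */
	ubi->fm_next_anchor = second;	/* reserved for the update after that */
}
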
Signed-off-by: Arne Edholm <arne.edholm@axis.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
If "seen_pebs = init_seen(ubi);" fails then "seen_pebs" is an error pointer
and we try to kfree() it which results in an Oops.
This patch re-arranges the error handling so now it only frees things
which have been allocated successfully.
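A sketch of the corrected pattern in UBI's fastmap write path (the goto
label is illustrative, not the actual one):

	seen_pebs = init_seen(ubi);
	if (IS_ERR(seen_pebs)) {
		ret = PTR_ERR(seen_pebs);
		/* Free only the buffers allocated before this point. */
		goto out_free_earlier_buffers;	/* hypothetical label */
	}
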
Fixes: daef3dd1f0 ("UBI: Fastmap: Add self check to detect absent PEBs")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
set_seen() sets the bit corresponding to the PEB number in the bitmap,
so when self_check_seen() wants to find PEBs that haven't been seen we
have to print the PEBs that have their bit cleared, not the ones which
have it set.
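The corrected loop then looks roughly like this (a sketch of
self_check_seen(), not the verbatim hunk):

	/*
	 * A set bit means the PEB was seen while writing the fastmap,
	 * so report the PEBs whose bit is still clear.
	 */
	for (pnum = 0; pnum < ubi->peb_count; pnum++) {
		if (!test_bit(pnum, seen) && ubi->lookuptbl[pnum]) {
			ubi_err(ubi, "self-check failed for PEB %d, fastmap didn't see it", pnum);
			ret = -EINVAL;
		}
	}
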
Fixes: 5d71afb008 ("ubi: Use bitmaps in Fastmap self-check code")
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
When a new fastmap is about to be written UBI must make sure it has a
free block for a fastmap anchor available. For this ubi_update_fastmap()
calls ubi_ensure_anchor_pebs(). This stopped working with 2e8f08deab
("ubi: Fix races around ubi_refill_pools()"): with that commit the wear
leveling code is blocked and can no longer produce free PEBs. UBI then
more often than not falls back to writing the new fastmap anchor to the
same block it was already on, which means the same erase block gets
erased during each fastmap write and wears out quite fast.
As the locking prevents us from producing the anchor PEB when we
actually need it, this patch changes the strategy for creating the
anchor PEB. We no longer create it on demand right before we want to
write a fastmap, but instead we create an anchor PEB right after we have
written a fastmap. This gives us enough time to produce a new anchor PEB
before it is needed. To make sure we have an anchor PEB for the very
first fastmap write we call ubi_ensure_anchor_pebs() during
initialisation as well.
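A rough sketch of the new ordering (a simplified wrapper for
illustration only; upstream changes ubi_update_fastmap() and the init
path directly):

static int write_fastmap_and_prepare_next_anchor(struct ubi_device *ubi)
{
	int err;

	err = ubi_update_fastmap(ubi);
	if (err)
		return err;

	/* Produce the anchor PEB for the *next* fastmap write now. */
	return ubi_ensure_anchor_pebs(ubi);
}
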
Fixes: 2e8f08deab ("ubi: Fix races around ubi_refill_pools()")
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation version 2 this program is distributed
in the hope that it will be useful but without any warranty without
even the implied warranty of merchantability or fitness for a
particular purpose see the gnu general public license for more
details
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 97 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190529141901.025053186@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Maintain a bitmap to keep track of which LEB->PEB mapping
was checked already.
That way we have to read back VID headers only once.
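A sketch of the idea, with a hypothetical check_mapping() helper
standing in for the actual VID header read-back:

/*
 * One bit per PEB: once a LEB->PEB mapping has been verified, mark the
 * PEB so its VID header is never read back a second time.
 */
static int self_check_mapping(struct ubi_device *ubi, int vol_id, int lnum,
			      int pnum, unsigned long *checked)
{
	int err;

	if (test_bit(pnum, checked))
		return 0;	/* already verified earlier */

	err = check_mapping(ubi, vol_id, lnum, pnum);	/* hypothetical helper */
	if (!err)
		set_bit(pnum, checked);

	return err;
}
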
Signed-off-by: Richard Weinberger <richard@nod.at>
The pointer p is being initialized with one value and a few lines
later being set to a newer replacement value. Clean up the code by
using the latter assignment to p as the initial value. Cleans up
clang warning:
drivers/mtd/ubi/fastmap.c:217:19: warning: Value stored to 'p'
during its initialization is never read
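The pattern being cleaned up, shown on a generic example rather than
the actual fastmap.c hunk:

#include <stddef.h>

/* Illustrative only: the dead-store pattern clang flags, and the fix. */
int sum_tail(const int *arr, size_t len)
{
	/*
	 * Before: "size_t i = 0;" followed by "i = 1;" a few lines
	 * later means the initial value is never read.
	 * After: initialize i with the value that is actually used.
	 */
	size_t i = 1;
	int sum = 0;

	for (; i < len; i++)
		sum += arr[i];

	return sum;
}
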
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Memory allocated by kmem_cache_alloc() should not be deallocated with
kfree(). Use kmem_cache_free() instead.
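The rule the fix enforces, sketched with UBI's attach-info cache (an
illustrative helper, not the actual diff):

/*
 * Objects obtained from a slab cache must go back to that same cache;
 * handing them to kfree() corrupts the allocator's bookkeeping.
 */
static void release_aeb(struct ubi_attach_info *ai, struct ubi_ainf_peb *aeb)
{
	kmem_cache_free(ai->aeb_slab_cache, aeb);	/* was: kfree(aeb) */
}
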
Signed-off-by: Pan Bian <bianpan2016@163.com>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Trivial fix to spelling mistake in ubi_err error message
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Booting with UBI fastmap and SLUB debugging enabled results in the
following splats. The problem is that ubi_scan_fastmap() moves the
fastmap blocks from the scan_ai (allocated in scan_fast()) to the ai
allocated in ubi_attach(). This results in two problems:
- When the scan_ai is freed, aebs which were allocated from its slab
cache are still in use.
- When the other ai is being destroyed in destroy_ai(), the
arguments to the kmem_cache_free() call are incorrect since aebs on its
->fastmap list were allocated with a slab cache from a different ai.
Fix this by making a copy of the aebs in ubi_scan_fastmap() instead of
moving them.
=============================================================================
BUG ubi_aeb_slab_cache (Not tainted): Objects remaining in ubi_aeb_slab_cache on __kmem_cache_shutdown()
-----------------------------------------------------------------------------
INFO: Slab 0xbfd2da3c objects=17 used=1 fp=0xb33d7748 flags=0x40000080
CPU: 1 PID: 118 Comm: ubiattach Tainted: G B 4.9.15 #3
[<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
[<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
[<804a3274>] (dump_stack) from [<8026c47c>] (slab_err+0x78/0x88)
[<8026c47c>] (slab_err) from [<802735bc>] (__kmem_cache_shutdown+0x180/0x3e0)
[<802735bc>] (__kmem_cache_shutdown) from [<8024e13c>] (shutdown_cache+0x1c/0x60)
[<8024e13c>] (shutdown_cache) from [<8024ed64>] (kmem_cache_destroy+0x19c/0x20c)
[<8024ed64>] (kmem_cache_destroy) from [<8057cc14>] (destroy_ai+0x1dc/0x1e8)
[<8057cc14>] (destroy_ai) from [<8057f04c>] (ubi_attach+0x3f4/0x450)
[<8057f04c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
[<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
[<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
[<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
[<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
INFO: Object 0xb33d7e88 @offset=3720
INFO: Allocated in scan_peb+0x608/0x81c age=72 cpu=1 pid=118
kmem_cache_alloc+0x3b0/0x43c
scan_peb+0x608/0x81c
ubi_attach+0x124/0x450
ubi_attach_mtd_dev+0x60c/0xff8
ctrl_cdev_ioctl+0x110/0x2b8
do_vfs_ioctl+0xac/0xa00
SyS_ioctl+0x3c/0x64
ret_fast_syscall+0x0/0x1c
kmem_cache_destroy ubi_aeb_slab_cache: Slab cache still has objects
CPU: 1 PID: 118 Comm: ubiattach Tainted: G B 4.9.15 #3
[<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
[<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
[<804a3274>] (dump_stack) from [<8024ed80>] (kmem_cache_destroy+0x1b8/0x20c)
[<8024ed80>] (kmem_cache_destroy) from [<8057cc14>] (destroy_ai+0x1dc/0x1e8)
[<8057cc14>] (destroy_ai) from [<8057f04c>] (ubi_attach+0x3f4/0x450)
[<8057f04c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
[<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
[<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
[<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
[<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
cache_from_obj: Wrong slab cache. ubi_aeb_slab_cache but object is from ubi_aeb_slab_cache
------------[ cut here ]------------
WARNING: CPU: 1 PID: 118 at mm/slab.h:354 kmem_cache_free+0x39c/0x450
Modules linked in:
CPU: 1 PID: 118 Comm: ubiattach Tainted: G B 4.9.15 #3
[<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
[<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
[<804a3274>] (dump_stack) from [<80120e40>] (__warn+0xf4/0x10c)
[<80120e40>] (__warn) from [<80120f20>] (warn_slowpath_null+0x28/0x30)
[<80120f20>] (warn_slowpath_null) from [<80271fe0>] (kmem_cache_free+0x39c/0x450)
[<80271fe0>] (kmem_cache_free) from [<8057cb88>] (destroy_ai+0x150/0x1e8)
[<8057cb88>] (destroy_ai) from [<8057ef1c>] (ubi_attach+0x2c4/0x450)
[<8057ef1c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
[<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
[<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
[<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
[<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
---[ end trace 2bd8396277fd0a0b ]---
=============================================================================
BUG ubi_aeb_slab_cache (Tainted: G B W ): page slab pointer corrupt.
-----------------------------------------------------------------------------
INFO: Allocated in scan_peb+0x608/0x81c age=104 cpu=1 pid=118
kmem_cache_alloc+0x3b0/0x43c
scan_peb+0x608/0x81c
ubi_attach+0x124/0x450
ubi_attach_mtd_dev+0x60c/0xff8
ctrl_cdev_ioctl+0x110/0x2b8
do_vfs_ioctl+0xac/0xa00
SyS_ioctl+0x3c/0x64
ret_fast_syscall+0x0/0x1c
INFO: Slab 0xbfd2da3c objects=17 used=1 fp=0xb33d7748 flags=0x40000081
INFO: Object 0xb33d7e88 @offset=3720 fp=0xb33d7da0
Redzone b33d7e80: cc cc cc cc cc cc cc cc ........
Object b33d7e88: 02 00 00 00 01 00 00 00 00 f0 ff 7f ff ff ff ff ................
Object b33d7e98: 00 00 00 00 00 00 00 00 bd 16 00 00 00 00 00 00 ................
Object b33d7ea8: 00 01 00 00 00 02 00 00 00 00 00 00 00 00 00 00 ................
Redzone b33d7eb8: cc cc cc cc ....
Padding b33d7f60: 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ
CPU: 1 PID: 118 Comm: ubiattach Tainted: G B W 4.9.15 #3
[<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
[<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
[<804a3274>] (dump_stack) from [<80271770>] (free_debug_processing+0x320/0x3c4)
[<80271770>] (free_debug_processing) from [<80271ad0>] (__slab_free+0x2bc/0x430)
[<80271ad0>] (__slab_free) from [<80272024>] (kmem_cache_free+0x3e0/0x450)
[<80272024>] (kmem_cache_free) from [<8057cb88>] (destroy_ai+0x150/0x1e8)
[<8057cb88>] (destroy_ai) from [<8057ef1c>] (ubi_attach+0x2c4/0x450)
[<8057ef1c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
[<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
[<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
[<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
[<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
FIX ubi_aeb_slab_cache: Object at 0xb33d7e88 not freed
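A simplified sketch of the copy-instead-of-move approach described
above the splats (error unwinding omitted; list and field names follow
the UBI attach structures):

static int copy_fastmap_aebs(struct ubi_attach_info *dst,
			     struct ubi_attach_info *src)
{
	struct ubi_ainf_peb *old, *new;

	list_for_each_entry(old, &src->fastmap, u.list) {
		/* Allocate from dst's cache so dst can free it later. */
		new = kmem_cache_alloc(dst->aeb_slab_cache, GFP_KERNEL);
		if (!new)
			return -ENOMEM;

		*new = *old;				/* copy the payload */
		list_add(&new->u.list, &dst->fastmap);	/* link into dst */
	}

	return 0;
}
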
Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Commit e96a8a3bb6 ("UBI: Fastmap: Do not add vol if it already
exists") introduced a bug by changing the possible error codes returned
by add_vol():
- this function no longer returns NULL in case of allocation failure
but returns ERR_PTR(-ENOMEM)
- when a duplicate entry in the volume RB tree is found it returns
ERR_PTR(-EEXIST) instead of ERR_PTR(-EINVAL)
Fix the tests done on the add_vol() return value to match this new behavior.
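Roughly, the caller-side test becomes (sketch; the argument list and
the goto label are illustrative):

	av = add_vol(ai, vol_id, used_ebs, data_pad, vol_type, last_eb_bytes);
	if (IS_ERR(av)) {
		if (PTR_ERR(av) == -EEXIST)
			ubi_err(ubi, "volume (ID %i) already exists", vol_id);

		ret = UBI_BAD_FASTMAP;
		goto fail_bad;	/* illustrative label */
	}
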
Fixes: e96a8a3bb6 ("UBI: Fastmap: Do not add vol if it already exists")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Acked-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Static analysis by CoverityScan detected that the ec and pnum
arguments are in the wrong order in a call to ubi_alloc_aeb().
Swap them to fix this.
Fixes: 91f4285fe3 ("UBI: provide helpers to allocate and free aeb elements")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Usually Fastmap is free to consider every PEB in one of the pools
as newer than the existing PEB, since PEBs in a pool are by definition
newer than everything else.
But update_vol() missed the case where a pool can contain more than
one candidate.
Cc: <stable@vger.kernel.org>
Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
When writing a new Fastmap the first thing that happens
is refilling the pools in memory.
At this stage it is possible that PEBs from the new pools
are already claimed and written with data.
If this happens before the new Fastmap data structure hits the
flash and we face a power cut, the freshly written PEB will not
be scanned and will go unnoticed.
Solve the issue by locking the pools until Fastmap is written.
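Conceptually the fix looks like this (the lock name is a placeholder,
not the actual UBI lock):

	/*
	 * Keep the pools locked from the refill until the new fastmap
	 * has reached the flash, so none of the freshly refilled PEBs
	 * can be claimed and written to in between.
	 */
	mutex_lock(&ubi->fm_pools_mutex);	/* placeholder lock */

	ubi_refill_pools(ubi);
	err = ubi_write_fastmap(ubi, new_fm);

	mutex_unlock(&ubi->fm_pools_mutex);
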
Cc: <stable@vger.kernel.org>
Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
Currently, all VID headers are allocated and freed using the
ubi_zalloc_vid_hdr() and ubi_free_vid_hdr() functions. These functions
make sure to align allocation on ubi->vid_hdr_alsize and adjust the
vid_hdr pointer to match the ubi->vid_hdr_shift requirements.
This works fine, but is a bit convoluted.
Moreover, the future introduction of LEB consolidation (needed to support
MLC/TLC NANDs) will allow a VID buffer to contain more than one VID
header.
Hence the creation of a ubi_vid_io_buf struct to attach extra information
to the VID header.
We currently only store the actual pointer of the underlying buffer, but
will soon add the number of VID headers contained in the buffer.
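The new wrapper, as described above, boils down to (sketch; the exact
definition lives in ubi.h):

struct ubi_vid_io_buf {
	struct ubi_vid_hdr *hdr;	/* vid_hdr_shift-adjusted pointer */
	void *buffer;			/* underlying vid_hdr_alsize-aligned buffer */
};
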
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
This is part of our attempt to hide EBA internals from other parts of the
implementation in order to easily adapt it to the MLC needs.
Here we are creating a ubi_eba_leb_desc struct to hide the way we keep
track of the LEB to PEB mapping.
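The descriptor is essentially (a sketch based on the description
above):

struct ubi_eba_leb_desc {
	int lnum;	/* the logical eraseblock number */
	int pnum;	/* the PEB it maps to, or UBI_LEB_UNMAPPED */
};
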
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
This not only hides the aeb allocation internals (which is always good in
case we ever want to change the allocation system), but also helps us
factorize the initialization of some common fields (ec and pnum).
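A sketch of the helper pair (simplified; UBI_UNKNOWN marks the fields
callers fill in later):

struct ubi_ainf_peb *ubi_alloc_aeb(struct ubi_attach_info *ai, int pnum,
				   int ec)
{
	struct ubi_ainf_peb *aeb;

	aeb = kmem_cache_alloc(ai->aeb_slab_cache, GFP_KERNEL);
	if (!aeb)
		return NULL;

	/* Common initialization done once, for every caller. */
	aeb->pnum = pnum;
	aeb->ec = ec;
	aeb->vol_id = UBI_UNKNOWN;
	aeb->lnum = UBI_UNKNOWN;

	return aeb;
}

void ubi_free_aeb(struct ubi_attach_info *ai, struct ubi_ainf_peb *aeb)
{
	kmem_cache_free(ai->aeb_slab_cache, aeb);
}
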
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
ubi_io_{read,write}_data() are wrappers around ubi_io_{read/write}() that
are used to read/write eraseblock payload data, which is exactly what
fastmap does when calling ubi_io_{read,write}().
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Use the ubi_rb_for_each_entry() macro instead of open-coding it.
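For reference, a typical use of the macro (illustrative loop body):

	struct ubi_ainf_volume *av;
	struct rb_node *node;

	/*
	 * Walk all volumes in the attach info without open-coding
	 * rb_first()/rb_next()/rb_entry().
	 */
	ubi_rb_for_each_entry(node, av, &ai->volumes, rb)
		pr_debug("found volume %d\n", av->vol_id);
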
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Volume creation/search code is duplicated in a few places (fastmap and
non fastmap code). Create some helpers to factorize the code.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
scan_pool() does not mark the PEB for scrubbing when bitflips are
detected in the EC header of a free PEB (VID header region left at
0xff).
Make sure we scrub the PEB in this case.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
process_pool_aeb() does several times the be32_to_cpu(new_vh->vol_id)
operation. Create a temporary variable and do it once.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
process_pool_aeb() re-implements the logic found in ubi_find_volume().
Call ubi_find_volume() to avoid this duplication.
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
This helps to detect cases where a user copies a UBI image to
another target with different bad blocks.
Signed-off-by: Richard Weinberger <richard@nod.at>
Introduce a new list to the UBI attach information
object to be able to deal better with old and corrupted
Fastmap eraseblocks.
Also move more Fastmap specific code into fastmap.c.
Signed-off-by: Richard Weinberger <richard@nod.at>
Ezequiel reported that he's facing UBI going into read-only
mode after a power cut. It turned out that this behavior happens
only when updating a static volume is interrupted and Fastmap is
used.
A possible trace can look like:
ubi0 warning: ubi_io_read_vid_hdr [ubi]: no VID header found at PEB 2323, only 0xFF bytes
ubi0 warning: ubi_eba_read_leb [ubi]: switch to read-only mode
CPU: 0 PID: 833 Comm: ubiupdatevol Not tainted 4.6.0-rc2-ARCH #4
Hardware name: SAMSUNG ELECTRONICS CO., LTD. 300E4C/300E5C/300E7C/NP300E5C-AD8AR, BIOS P04RAP 10/15/2012
0000000000000286 00000000eba949bd ffff8800c45a7b38 ffffffff8140d841
ffff8801964be000 ffff88018eaa4800 ffff8800c45a7bb8 ffffffffa003abf6
ffffffff850e2ac0 8000000000000163 ffff8801850e2ac0 ffff8801850e2ac0
Call Trace:
[<ffffffff8140d841>] dump_stack+0x63/0x82
[<ffffffffa003abf6>] ubi_eba_read_leb+0x486/0x4a0 [ubi]
[<ffffffffa00453b3>] ubi_check_volume+0x83/0xf0 [ubi]
[<ffffffffa0039d97>] ubi_open_volume+0x177/0x350 [ubi]
[<ffffffffa00375d8>] vol_cdev_open+0x58/0xb0 [ubi]
[<ffffffff8124b08e>] chrdev_open+0xae/0x1d0
[<ffffffff81243bcf>] do_dentry_open+0x1ff/0x300
[<ffffffff8124afe0>] ? cdev_put+0x30/0x30
[<ffffffff81244d36>] vfs_open+0x56/0x60
[<ffffffff812545f4>] path_openat+0x4f4/0x1190
[<ffffffff81256621>] do_filp_open+0x91/0x100
[<ffffffff81263547>] ? __alloc_fd+0xc7/0x190
[<ffffffff812450df>] do_sys_open+0x13f/0x210
[<ffffffff812451ce>] SyS_open+0x1e/0x20
[<ffffffff81a99e32>] entry_SYSCALL_64_fastpath+0x1a/0xa4
UBI checks static volumes for data consistency and reads the
whole volume upon first open. If the volume is found erroneous
users of UBI cannot read from it, but another volume update is
possible to fix it. The check is performed by running
ubi_eba_read_leb() on every allocated LEB of the volume.
For static volumes ubi_eba_read_leb() computes the checksum of all
data stored in a LEB. To verify the computed checksum it has to read
the LEB's volume header which stores the original checksum.
If the volume header is not found, UBI treats this as a fatal internal
error and switches to RO mode. If the UBI device was attached via a
full scan the assumption is correct: the volume header has to be
present, as it had to be there during scanning for the LEB to be known
as mapped.
If the attach operation happened via Fastmap the assumption is no
longer correct. When attaching via Fastmap UBI learns the mapping
table from Fastmap's snapshot of the system state and not via a full
scan. It can happen that a LEB got unmapped after a Fastmap was
written to the flash. Then UBI can still learn the LEB as mapped, and
accessing it returns only 0xFF bytes. As UBI is not an FTL it is
allowed to have mappings to empty PEBs; it assumes that the layer
above takes care of LEB accounting and referencing.
UBIFS does so using the LEB property tree (LPT).
For static volumes UBI blindly assumes that all LEBs are present and
therefore special actions have to be taken.
The described situation can happen when updating a static volume is
interrupted, either by a user or a power cut.
The volume update code first unmaps all LEBs of a volume and then
writes LEB by LEB. If the sequence of operations is interrupted, UBI
detects this either by the absence of LEBs (no volume header present
at scan time) or by corrupted payload (detected via checksum).
In the Fastmap case the former method won't trigger as no scan
happened and UBI automatically thinks all LEBs are present.
Only when reading data from a LEB does it detect that the volume
header is missing, and it incorrectly treats this as a fatal error.
To deal with the situation ubi_eba_read_leb() from now on checks
whether we attached via Fastmap and handles the absence of a
volume header like a data corruption error.
This way interrupted static volume updates will correctly get detected
also when Fastmap is used.
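Sketched, the change in ubi_eba_read_leb() amounts to (simplified;
the label name is illustrative):

	if (err == UBI_IO_FF && ubi->fast_attach) {
		/*
		 * The mapping may be a leftover of an interrupted
		 * static volume update that only Fastmap remembers.
		 * Report it like corrupted data instead of treating it
		 * as a fatal internal error.
		 */
		err = -EBADMSG;
		goto out_unlock;	/* illustrative label */
	}
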
Cc: <stable@vger.kernel.org>
Reported-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Tested-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Signed-off-by: Richard Weinberger <richard@nod.at>
The PEB array is an array of __be32, so let's fix the
scan_pool() prototype accordingly.
Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Signed-off-by: Richard Weinberger <richard@nod.at>
During fastmap attach, check whether a volume already exists when adding
the volume to the volume tree. Note that this issue can only happen if
the on-flash fastmap data has been modified.
Signed-off-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
s/fmpl1/fmpl
s/fmpl2/fmpl_wl
Add "WL" to the error message when wrong WL pool magic number is detected.
Signed-off-by: Sheng Yong <shengyong1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
This self check allows Fastmap to detect absent PEBs while
writing a new fastmap to the MTD device.
It will help to find implementation issues in Fastmap.
Signed-off-by: Richard Weinberger <richard@nod.at>
If UBI is unable to write the fastmap to the device
we have to make sure that upon the next attach UBI will fall
back to scanning mode.
In case we cannot ensure that, the only thing we can do
is fall back to read-only mode.
The current error handling code is not power-cut proof.
A power cut while invalidating the fastmap could
lead to a state where a too old fastmap could be used upon
attach.
This patch addresses the issue by writing a fake fastmap
super block to a fresh PEB instead of re-erasing the existing one.
The fake fastmap super block will cause UBI to do a full scan.
Signed-off-by: Richard Weinberger <richard@nod.at>
The current code assumes that each fastmap has the same amount of PEBs.
So far this is true but will change soon.
Signed-off-by: Richard Weinberger <richard@nod.at>
a) Rename ubi->fm_sem to ubi->fm_eba_sem as this semaphore
protects EBA changes.
b) Turn ubi->fm_mutex into a rw semaphore. It will still serialize
fastmap writes but will also ensure that ubi_wl_put_peb() is not
interrupted by a fastmap write. We use a rw semaphore so that
ubi_wl_put_peb() can still be executed in parallel if no fastmap
write is happening.
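The intended locking rule, sketched (the semaphore is shown as
ubi->fm_protect; treat the exact name as illustrative):

	/* Fastmap write path: exclusive access. */
	down_write(&ubi->fm_protect);
	/* ... write the new fastmap ... */
	up_write(&ubi->fm_protect);

	/*
	 * ubi_wl_put_peb(): shared access, so PEB puts may run in
	 * parallel with each other but never during a fastmap write.
	 */
	down_read(&ubi->fm_protect);
	/* ... return the PEB to the WL subsystem ... */
	up_read(&ubi->fm_protect);
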
Signed-off-by: Richard Weinberger <richard@nod.at>
This logic is in vain as we also treat protected PEBs as used, so this
case must not happen.
If a PEB is found which is in the EBA table but not known as used,
this has to be reported as a fatal error.
Signed-off-by: Richard Weinberger <richard@nod.at>
It is legal to have PEBs left in the used list.
This can happen if UBI copies a PEB and a powercut happens
between writing a new fastmap and adding this PEB into the EBA table.
In this case the old PEB will be used.
Signed-off-by: Richard Weinberger <richard@nod.at>
There is no need to allocate new ones every time; we can reuse
the existing ones.
This makes the code cleaner and easier to follow.
Signed-off-by: Richard Weinberger <richard@nod.at>
Reviewed-by: Tanya Brokhman <tlinder@codeaurora.org>
Reviewed-by: Guido Martínez <guido@vanguardiasur.com.ar>
Fastmap can miss a PEB if it is in the protection queue
and not yet in the used tree.
Treat every protected PEB as used.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
If there is more than one UBI device mounted, there is no way to
distinguish between messages from different UBI devices.
Add the device number to all UBI layer message types.
The R/O block driver messages were replaced by pr_* since the
ubi_device structure is not used by it.
Amended a bit by Artem.
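The resulting helpers look roughly like this (a simplified sketch of
the ubi.h macros):

#define ubi_msg(ubi, fmt, ...) \
	pr_notice(UBI_NAME_STR "%d: " fmt "\n", (ubi)->ubi_num, ##__VA_ARGS__)

#define ubi_warn(ubi, fmt, ...) \
	pr_warn(UBI_NAME_STR "%d warning: %s: " fmt "\n", \
		(ubi)->ubi_num, __func__, ##__VA_ARGS__)

#define ubi_err(ubi, fmt, ...) \
	pr_err(UBI_NAME_STR "%d error: %s: " fmt "\n", \
	       (ubi)->ubi_num, __func__, ##__VA_ARGS__)
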
Signed-off-by: Tanya Brokhman <tlinder@codeaurora.org>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
We need to add fm_sb too.
Signed-off-by: Richard Weinberger <richard@nod.at>
Reviewed-by: Tanya Brokhman <tlinder@codeaurora.org>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
I ran into this error after a ubiupdatevol, because I forgot to backport
e9110361a9 ("UBI: fix the volumes tree sorting criteria").
UBI error: process_pool_aeb: orphaned volume in fastmap pool
UBI error: ubi_scan_fastmap: Attach by fastmap failed, doing a full scan!
kmem_cache_destroy ubi_ainf_peb_slab: Slab cache still has objects
CPU: 0 PID: 1 Comm: swapper Not tainted 3.14.18-00053-gf05cac8dbf85 #1
[<c000d298>] (unwind_backtrace) from [<c000baa8>] (show_stack+0x10/0x14)
[<c000baa8>] (show_stack) from [<c01b7a68>] (destroy_ai+0x230/0x244)
[<c01b7a68>] (destroy_ai) from [<c01b8fd4>] (ubi_attach+0x98/0x1ec)
[<c01b8fd4>] (ubi_attach) from [<c01ade90>] (ubi_attach_mtd_dev+0x2b8/0x868)
[<c01ade90>] (ubi_attach_mtd_dev) from [<c038b510>] (ubi_init+0x1dc/0x2ac)
[<c038b510>] (ubi_init) from [<c0008860>] (do_one_initcall+0x94/0x140)
[<c0008860>] (do_one_initcall) from [<c037aadc>] (kernel_init_freeable+0xe8/0x1b0)
[<c037aadc>] (kernel_init_freeable) from [<c02730ac>] (kernel_init+0x8/0xe4)
[<c02730ac>] (kernel_init) from [<c00093f0>] (ret_from_fork+0x14/0x24)
UBI: scanning is finished
Freeing the cache in the error path fixes the Slab error.
Tested on at91sam9g35 (3.14.18+fastmap backports)
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Cc: stable <stable@vger.kernel.org> # 3.10+