Commit Graph

96 Commits

Author SHA1 Message Date
Thomas Gleixner
1a59d1b8e0 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 156
Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license as published by
  the free software foundation either version 2 of the license or at
  your option any later version this program is distributed in the
  hope that it will be useful but without any warranty without even
  the implied warranty of merchantability or fitness for a particular
  purpose see the gnu general public license for more details you
  should have received a copy of the gnu general public license along
  with this program if not write to the free software foundation inc
  59 temple place suite 330 boston ma 02111 1307 usa

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 1334 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070033.113240726@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-30 11:26:35 -07:00
Kees Cook
6da2ec5605 treewide: kmalloc() -> kmalloc_array()
The kmalloc() function has a 2-factor argument form, kmalloc_array(). This
patch replaces cases of:

        kmalloc(a * b, gfp)

with:
        kmalloc_array(a, b, gfp)

as well as handling cases of:

        kmalloc(a * b * c, gfp)

with:

        kmalloc(array3_size(a, b, c), gfp)

as it's slightly less ugly than:

        kmalloc_array(array_size(a, b), c, gfp)

This does, however, attempt to ignore constant size factors like:

        kmalloc(4 * 1024, gfp)

though any constants defined via macros get caught up in the conversion.

Any factors with a sizeof() of "unsigned char", "char", and "u8" were
dropped, since they're redundant.

The tools/ directory was manually excluded, since it has its own
implementation of kmalloc().
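
For illustration only (not part of the patch), a typical conversion in a
driver looks roughly like this; 'count', 'ptr' and 'struct foo' are
placeholders, and kmalloc_array() adds an overflow check on the multiply:

	/* before: the multiplication can silently overflow */
	ptr = kmalloc(count * sizeof(struct foo), GFP_KERNEL);

	/* after: returns NULL if count * sizeof(struct foo) would overflow */
	ptr = kmalloc_array(count, sizeof(struct foo), GFP_KERNEL);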

The Coccinelle script used for this was:

// Fix redundant parens around sizeof().
@@
type TYPE;
expression THING, E;
@@

(
  kmalloc(
-	(sizeof(TYPE)) * E
+	sizeof(TYPE) * E
  , ...)
|
  kmalloc(
-	(sizeof(THING)) * E
+	sizeof(THING) * E
  , ...)
)

// Drop single-byte sizes and redundant parens.
@@
expression COUNT;
typedef u8;
typedef __u8;
@@

(
  kmalloc(
-	sizeof(u8) * (COUNT)
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(__u8) * (COUNT)
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(char) * (COUNT)
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(unsigned char) * (COUNT)
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(u8) * COUNT
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(__u8) * COUNT
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(char) * COUNT
+	COUNT
  , ...)
|
  kmalloc(
-	sizeof(unsigned char) * COUNT
+	COUNT
  , ...)
)

// 2-factor product with sizeof(type/expression) and identifier or constant.
@@
type TYPE;
expression THING;
identifier COUNT_ID;
constant COUNT_CONST;
@@

(
- kmalloc
+ kmalloc_array
  (
-	sizeof(TYPE) * (COUNT_ID)
+	COUNT_ID, sizeof(TYPE)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(TYPE) * COUNT_ID
+	COUNT_ID, sizeof(TYPE)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(TYPE) * (COUNT_CONST)
+	COUNT_CONST, sizeof(TYPE)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(TYPE) * COUNT_CONST
+	COUNT_CONST, sizeof(TYPE)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(THING) * (COUNT_ID)
+	COUNT_ID, sizeof(THING)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(THING) * COUNT_ID
+	COUNT_ID, sizeof(THING)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(THING) * (COUNT_CONST)
+	COUNT_CONST, sizeof(THING)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(THING) * COUNT_CONST
+	COUNT_CONST, sizeof(THING)
  , ...)
)

// 2-factor product, only identifiers.
@@
identifier SIZE, COUNT;
@@

- kmalloc
+ kmalloc_array
  (
-	SIZE * COUNT
+	COUNT, SIZE
  , ...)

// 3-factor product with 1 sizeof(type) or sizeof(expression), with
// redundant parens removed.
@@
expression THING;
identifier STRIDE, COUNT;
type TYPE;
@@

(
  kmalloc(
-	sizeof(TYPE) * (COUNT) * (STRIDE)
+	array3_size(COUNT, STRIDE, sizeof(TYPE))
  , ...)
|
  kmalloc(
-	sizeof(TYPE) * (COUNT) * STRIDE
+	array3_size(COUNT, STRIDE, sizeof(TYPE))
  , ...)
|
  kmalloc(
-	sizeof(TYPE) * COUNT * (STRIDE)
+	array3_size(COUNT, STRIDE, sizeof(TYPE))
  , ...)
|
  kmalloc(
-	sizeof(TYPE) * COUNT * STRIDE
+	array3_size(COUNT, STRIDE, sizeof(TYPE))
  , ...)
|
  kmalloc(
-	sizeof(THING) * (COUNT) * (STRIDE)
+	array3_size(COUNT, STRIDE, sizeof(THING))
  , ...)
|
  kmalloc(
-	sizeof(THING) * (COUNT) * STRIDE
+	array3_size(COUNT, STRIDE, sizeof(THING))
  , ...)
|
  kmalloc(
-	sizeof(THING) * COUNT * (STRIDE)
+	array3_size(COUNT, STRIDE, sizeof(THING))
  , ...)
|
  kmalloc(
-	sizeof(THING) * COUNT * STRIDE
+	array3_size(COUNT, STRIDE, sizeof(THING))
  , ...)
)

// 3-factor product with 2 sizeof(variable), with redundant parens removed.
@@
expression THING1, THING2;
identifier COUNT;
type TYPE1, TYPE2;
@@

(
  kmalloc(
-	sizeof(TYPE1) * sizeof(TYPE2) * COUNT
+	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
  , ...)
|
  kmalloc(
-	sizeof(TYPE1) * sizeof(TYPE2) * (COUNT)
+	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
  , ...)
|
  kmalloc(
-	sizeof(THING1) * sizeof(THING2) * COUNT
+	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
  , ...)
|
  kmalloc(
-	sizeof(THING1) * sizeof(THING2) * (COUNT)
+	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
  , ...)
|
  kmalloc(
-	sizeof(TYPE1) * sizeof(THING2) * COUNT
+	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
  , ...)
|
  kmalloc(
-	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
+	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
  , ...)
)

// 3-factor product, only identifiers, with redundant parens removed.
@@
identifier STRIDE, SIZE, COUNT;
@@

(
  kmalloc(
-	(COUNT) * STRIDE * SIZE
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	COUNT * (STRIDE) * SIZE
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	COUNT * STRIDE * (SIZE)
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	(COUNT) * (STRIDE) * SIZE
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	COUNT * (STRIDE) * (SIZE)
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	(COUNT) * STRIDE * (SIZE)
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	(COUNT) * (STRIDE) * (SIZE)
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
|
  kmalloc(
-	COUNT * STRIDE * SIZE
+	array3_size(COUNT, STRIDE, SIZE)
  , ...)
)

// Any remaining multi-factor products, first at least 3-factor products,
// when they're not all constants...
@@
expression E1, E2, E3;
constant C1, C2, C3;
@@

(
  kmalloc(C1 * C2 * C3, ...)
|
  kmalloc(
-	(E1) * E2 * E3
+	array3_size(E1, E2, E3)
  , ...)
|
  kmalloc(
-	(E1) * (E2) * E3
+	array3_size(E1, E2, E3)
  , ...)
|
  kmalloc(
-	(E1) * (E2) * (E3)
+	array3_size(E1, E2, E3)
  , ...)
|
  kmalloc(
-	E1 * E2 * E3
+	array3_size(E1, E2, E3)
  , ...)
)

// And then all remaining 2 factors products when they're not all constants,
// keeping sizeof() as the second factor argument.
@@
expression THING, E1, E2;
type TYPE;
constant C1, C2, C3;
@@

(
  kmalloc(sizeof(THING) * C2, ...)
|
  kmalloc(sizeof(TYPE) * C2, ...)
|
  kmalloc(C1 * C2 * C3, ...)
|
  kmalloc(C1 * C2, ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(TYPE) * (E2)
+	E2, sizeof(TYPE)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(TYPE) * E2
+	E2, sizeof(TYPE)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(THING) * (E2)
+	E2, sizeof(THING)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	sizeof(THING) * E2
+	E2, sizeof(THING)
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	(E1) * E2
+	E1, E2
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	(E1) * (E2)
+	E1, E2
  , ...)
|
- kmalloc
+ kmalloc_array
  (
-	E1 * E2
+	E1, E2
  , ...)
)

Signed-off-by: Kees Cook <keescook@chromium.org>
2018-06-12 16:19:22 -07:00
Richard Weinberger
3e5e4335cc ubi: fastmap: Detect EBA mismatches on-the-fly
Now we have the machinery to detect EBA mismatches on-the-fly
by comparing the in-memory volume ID and LEB number with the found
VID header.
This helps to detect malfunction of Fastmap.
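
Conceptually the check is along these lines (simplified sketch, not the
literal patch; vol_id and lnum are the values recorded in memory):

	if (be32_to_cpu(vid_hdr->vol_id) != vol_id ||
	    be32_to_cpu(vid_hdr->lnum) != lnum) {
		/* EBA table and on-flash VID header disagree:
		 * the Fastmap-provided mapping is wrong. */
		ubi_err(ubi, "EBA mismatch for PEB %d", pnum);
	}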

Signed-off-by: Richard Weinberger <richard@nod.at>
2018-06-07 15:53:17 +02:00
Richard Weinberger
34653fd8c4 ubi: fastmap: Check each mapping only once
Maintain a bitmap to keep track of which LEB->PEB mapping
was checked already.
That way we have to read back VID headers only once.
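
The bookkeeping is roughly of this shape (sketch; the names 'checkmap'
and check_mapping() are assumptions for illustration):

	if (!test_bit(lnum, vol->checkmap)) {
		/* first access to this LEB: read back and verify the VID header */
		err = check_mapping(ubi, vol, lnum, &pnum);
		if (!err)
			set_bit(lnum, vol->checkmap);
	}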

Signed-off-by: Richard Weinberger <richard@nod.at>
2018-06-07 15:53:16 +02:00
Richard Weinberger
781932375f ubi: fastmap: Correctly handle interrupted erasures in EBA
Fastmap cannot track the LEB unmap operation, therefore it can
happen that after an interrupted erasure the mapping still looks
good from Fastmap's point of view, while reading from the PEB will
cause an ECC error and confuse the upper layer.

Instead of teaching users of UBI how to deal with that, we read back
the VID header and check for errors. If the PEB is empty or shows ECC
errors we fixup the mapping and schedule the PEB for erasure.

Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Cc: <stable@vger.kernel.org>
Reported-by: martin bayern <Martinbayern@outlook.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2018-06-07 15:53:16 +02:00
Sascha Hauer
01f196945a ubi: Fix copy/paste error in function documentation
The function documentation of leb_write_trylock is copied from
leb_write_lock. Replace the function name with the correct one.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
2018-01-18 00:18:51 +01:00
Geert Uytterhoeven
884a3b6478 UBI: Fix crash in try_recover_peb()
drivers/mtd/ubi/eba.c: In function ‘try_recover_peb’:
    drivers/mtd/ubi/eba.c:744: warning: ‘vid_hdr’ is used uninitialized in this function

The pointer vid_hdr is indeed not initialized, leading to a crash when
it is dereferenced.

Fix this by obtaining the pointer from the VID buffer, like is done
everywhere else.
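
The fix is essentially a one-liner of this form (sketch):

	struct ubi_vid_hdr *vid_hdr = ubi_get_vid_hdr(vidb);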

Fixes: 3291b52f9f ("UBI: introduce the VID buffer concept")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-20 00:06:06 +02:00
Richard Weinberger
2e8f08deab ubi: Fix races around ubi_refill_pools()
When writing a new Fastmap the first thing that happens
is refilling the pools in memory.
At this stage it is possible that new PEBs from the new pools
already get claimed and written with data.
If this happens before the new Fastmap data structure hits the
flash and we face a power cut, the freshly written PEB will not
be scanned and will go unnoticed.

Solve the issue by locking the pools until Fastmap is written.

Cc: <stable@vger.kernel.org>
Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:54:01 +02:00
Boris Brezillon
3291b52f9f UBI: introduce the VID buffer concept
Currently, all VID headers are allocated and freed using the
ubi_zalloc_vid_hdr() and ubi_free_vid_hdr() functions. These functions
make sure to align allocation on ubi->vid_hdr_alsize and adjust the
vid_hdr pointer to match the ubi->vid_hdr_shift requirements.
This works fine, but is a bit convoluted.
Moreover, the future introduction of LEB consolidation (needed to support
MLC/TLC NANDs) will allow a VID buffer to contain more than one VID
header.

Hence the creation of a ubi_vid_io_buf struct to attach extra information
to the VID header.

We currently only store the actual pointer of the underlying buffer, but
will soon add the number of VID headers contained in the buffer.
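
The new structure is deliberately small; roughly (simplified sketch):

	struct ubi_vid_io_buf {
		struct ubi_vid_hdr *hdr;	/* points into 'buffer', adjusted for vid_hdr_shift */
		void *buffer;			/* underlying raw I/O buffer */
	};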

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Boris Brezillon
799dca34ac UBI: hide EBA internals
Create a private ubi_eba_table struct to hide EBA internals and provide
helpers to allocate, destroy, copy and assign an EBA table to a volume.

Now that external EBA users are using helpers to query/modify the EBA
state we can safely change the internal representation, which will be
needed to support the LEB consolidation concept.
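
The helpers have roughly the following signatures (sketch of the API,
not quoted from the patch):

	struct ubi_eba_table *ubi_eba_create_table(struct ubi_volume *vol, int nentries);
	void ubi_eba_destroy_table(struct ubi_eba_table *tbl);
	void ubi_eba_copy_table(struct ubi_volume *vol, struct ubi_eba_table *dst, int nentries);
	void ubi_eba_replace_table(struct ubi_volume *vol, struct ubi_eba_table *tbl);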

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Boris Brezillon
1f81a5ccab UBI: provide a helper to query LEB information
This is part of our attempt to hide EBA internals from other parts of the
implementation in order to easily adapt it to the MLC needs.

Here we are creating a ubi_eba_leb_desc struct to hide the way we keep
track of the LEB to PEB mapping.
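
The descriptor only exposes what callers need (simplified sketch):

	struct ubi_eba_leb_desc {
		int lnum;	/* the logical eraseblock number */
		int pnum;	/* the PEB it maps to, or UBI_LEB_UNMAPPED */
	};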

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Boris Brezillon
7554769641 UBI: provide a helper to check whether a LEB is mapped or not
This is part of the process of hiding UBI EBA's internals from other parts of
the UBI implementation, so that we can add new information to the EBA
table without having to patch different places in the UBI code.
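
Illustrative usage of the new predicate (sketch):

	if (!ubi_eba_is_mapped(vol, lnum))
		return 0;	/* nothing mapped, nothing to do */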

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Boris Brezillon
2d78aee426 UBI: simplify LEB write and atomic LEB change code
ubi_eba_write_leb(), ubi_eba_write_leb_st() and
ubi_eba_atomic_leb_change() are using a convoluted retry/exit path.
Add the try_write_vid_and_data() function to simplify the retry logic
and make sure we have a single exit path instead of manually releasing
the resources in each error path.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Boris Brezillon
f036dfeb85 UBI: simplify recover_peb() code
recover_peb() is using a convoluted retry/exit path. Add try_recover_peb()
to simplify the retry logic and make sure we have a single exit path
instead of manually releasing the resource in each error path.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02 22:48:14 +02:00
Richard Weinberger
972228d874 ubi: Make recover_peb power cut aware
recover_peb() was never power cut aware;
if a power cut happened right after writing the VID header,
upon the next attach UBI would blindly use the new, partially written
PEB and all data from the old PEB would be lost.

In order to make recover_peb() power cut aware, write the new
VID with a proper crc and copy_flag set such that the UBI attach
process will detect whether the new PEB is completely written
or not.
We cannot directly use ubi_eba_atomic_leb_change() since we'd
have to unlock the LEB which is facing a write error.
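
Making the copy self-describing boils down to filling in the VID header
before it is written, roughly like this (sketch; 'data_offset' is an
illustrative variable name):

	crc = crc32(UBI_CRC32_INIT, ubi->peb_buf + data_offset, data_size);
	vid_hdr->copy_flag = 1;
	vid_hdr->data_size = cpu_to_be32(data_size);
	vid_hdr->data_crc = cpu_to_be32(crc);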

Cc: stable@vger.kernel.org
Reported-by: Jörg Pfähler <pfaehler@isse.de>
Reviewed-by: Jörg Pfähler <pfaehler@isse.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-06-23 00:29:32 +02:00
Richard Weinberger
1900149c83 UBI: Fix static volume checks when Fastmap is used
Ezequiel reported that he's facing UBI going into read-only
mode after power cut. It turned out that this behavior happens
only when updating a static volume is interrupted and Fastmap is
used.

A possible trace can look like:
ubi0 warning: ubi_io_read_vid_hdr [ubi]: no VID header found at PEB 2323, only 0xFF bytes
ubi0 warning: ubi_eba_read_leb [ubi]: switch to read-only mode
CPU: 0 PID: 833 Comm: ubiupdatevol Not tainted 4.6.0-rc2-ARCH #4
Hardware name: SAMSUNG ELECTRONICS CO., LTD. 300E4C/300E5C/300E7C/NP300E5C-AD8AR, BIOS P04RAP 10/15/2012
0000000000000286 00000000eba949bd ffff8800c45a7b38 ffffffff8140d841
ffff8801964be000 ffff88018eaa4800 ffff8800c45a7bb8 ffffffffa003abf6
ffffffff850e2ac0 8000000000000163 ffff8801850e2ac0 ffff8801850e2ac0
Call Trace:
[<ffffffff8140d841>] dump_stack+0x63/0x82
[<ffffffffa003abf6>] ubi_eba_read_leb+0x486/0x4a0 [ubi]
[<ffffffffa00453b3>] ubi_check_volume+0x83/0xf0 [ubi]
[<ffffffffa0039d97>] ubi_open_volume+0x177/0x350 [ubi]
[<ffffffffa00375d8>] vol_cdev_open+0x58/0xb0 [ubi]
[<ffffffff8124b08e>] chrdev_open+0xae/0x1d0
[<ffffffff81243bcf>] do_dentry_open+0x1ff/0x300
[<ffffffff8124afe0>] ? cdev_put+0x30/0x30
[<ffffffff81244d36>] vfs_open+0x56/0x60
[<ffffffff812545f4>] path_openat+0x4f4/0x1190
[<ffffffff81256621>] do_filp_open+0x91/0x100
[<ffffffff81263547>] ? __alloc_fd+0xc7/0x190
[<ffffffff812450df>] do_sys_open+0x13f/0x210
[<ffffffff812451ce>] SyS_open+0x1e/0x20
[<ffffffff81a99e32>] entry_SYSCALL_64_fastpath+0x1a/0xa4

UBI checks static volumes for data consistency and reads the
whole volume upon first open. If the volume is found erroneous
users of UBI cannot read from it, but another volume update is
possible to fix it. The check is performed by running
ubi_eba_read_leb() on every allocated LEB of the volume.
For static volumes ubi_eba_read_leb() computes the checksum of all
data stored in a LEB. To verify the computed checksum it has to read
the LEB's volume header which stores the original checksum.
If the volume header is not found UBI treats this as fatal internal
error and switches to RO mode. If the UBI device was attached via a
full scan the assumption is correct: the volume header has to be
present, as it had to be there during scanning for the LEB to become known as mapped.
If the attach operation happened via Fastmap the assumption is no
longer correct. When attaching via Fastmap UBI learns the mapping
table from Fastmap's snapshot of the system state and not via a full
scan. It can happen that a LEB got unmapped after a Fastmap was
written to the flash. Then UBI can still see the LEB as mapped, and
accessing it returns only 0xFF bytes. As UBI is not an FTL it is
allowed to have mappings to empty PEBs; it assumes that the layer
above takes care of LEB accounting and referencing.
UBIFS does so using the LEB property tree (LPT).
For static volumes UBI blindly assumes that all LEBs are present and
therefore special actions have to be taken.

The described situation can happen when updating a static volume is
interrupted, either by a user or a power cut.
The volume update code first unmaps all LEBs of a volume and then
writes LEB by LEB. If the sequence of operations is interrupted UBI
detects this either by the absence of LEBs (no volume header present
at scan time) or by a corrupted payload (detected via the checksum).
In the Fastmap case the former method won't trigger as no scan
happened and UBI automatically thinks all LEBs are present.
Only by reading data from a LEB does it detect that the volume header is
missing, and it incorrectly treats this as a fatal error.
To deal with the situation ubi_eba_read_leb() from now on checks
whether we attached via Fastmap and handles the absence of a
volume header like a data corruption error.
This way interrupted static volume updates will correctly get detected
also when Fastmap is used.
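
The change in ubi_eba_read_leb() amounts to something like this
(simplified sketch; a flag recording that we attached via Fastmap is
assumed):

	if (ubi->fast_attach) {
		/* An empty LEB is possible after an interrupted update:
		 * treat it like a corrupted header, not an internal bug. */
		err = -EBADMSG;
	} else {
		ubi_ro_mode(ubi);
	}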

Cc: <stable@vger.kernel.org>
Reported-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Tested-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-05-24 15:24:37 +02:00
Richard Weinberger
1e0a74f10d UBI: Don't read back all data in ubi_eba_copy_leb()
Drop this paranoia check from the old days.
If our MTD driver or the flash is so bad that we cannot even
trust it to write data, we have bigger problems.

If one really does not trust the flash and wants write-verify
she can enable UBI io checks using debugfs.

Signed-off-by: Richard Weinberger <richard@nod.at>
2016-05-24 15:21:01 +02:00
Richard Weinberger
5347417e56 UBI: Fix debug message
We have to use j instead of i. i is the volume id
and not the block.

Reported-by: Alexander.Block@continental-corporation.com
Signed-off-by: Richard Weinberger <richard@nod.at>
Acked-by: Brian Norris <computersforpeace@gmail.com>
2015-10-03 20:09:55 +02:00
Richard Weinberger
111ab0b26f UBI: Fastmap: Locking updates
a) Rename ubi->fm_sem to ubi->fm_eba_sem as this semaphore
protects EBA changes.
b) Turn ubi->fm_mutex into a rw semaphore. It will still serialize
fastmap writes but also ensures that ubi_wl_put_peb() is not
interrupted by a fastmap write. We use a rw semaphore to allow
ubi_wl_put_peb() still to be executed in parallel if no fastmap
write is happening.

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:46:02 +01:00
Richard Weinberger
8fb2a51478 UBI: Fastmap: Fix race after ubi_wl_get_peb()
ubi_wl_get_peb() returns a fresh PEB which can be used by
users of UBI. Due to the pool logic fastmap will correctly
map this PEB at attach time because it will be scanned.

If a new fastmap is written (due to heavy parallel I/O)
before the fresh PEB is assigned to the EBA table,
it will not be scanned as it is no longer in the pool.
So a race window exists between ubi_wl_get_peb()
and the EBA table assignment.
We have to make sure that no new fastmap can be written
during that window.

To ensure that, ubi_wl_get_peb() will grab ubi->fm_sem in read mode
and the user of ubi_wl_get_peb() has to release it after the PEB
got assigned.
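
Callers then follow this pattern (sketch; the EBA table was still a
plain array at this point):

	pnum = ubi_wl_get_peb(ubi);	/* returns with ubi->fm_sem held for reading */
	/* ... write the data to the new PEB ... */
	vol->eba_tbl[lnum] = pnum;	/* make the mapping visible */
	up_read(&ubi->fm_sem);		/* only now may a new fastmap be written */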

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:46:01 +01:00
Richard Weinberger
36a87e44f6 UBI: Fastmap: Fix race in ubi_eba_atomic_leb_change()
This function a) requests a new PEB, b) writes data to it,
c) returns the old PEB and d) registers the new PEB in the EBA table.

For the non-fastmap case this works perfectly fine and is powercut safe.
If fastmap is enabled this can lead to issues.
If a new fastmap is written between a) and c) the freshly requested PEB
is no longer in a pool and will not be scanned upon attaching.
If a powercut now happens between c) and d) the freshly requested PEB
will not be scanned and the old one has already been scheduled for erase.
After attaching, the EBA table will point to an erased PEB.

Fix this issue by swapping steps c) and d).

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 22:45:58 +01:00
Brian Norris
d74adbdb9a UBI: fix out of bounds write
If aeb->lnum >= vol->reserved_pebs, we should not be writing aeb into the
PEB->LEB mapping.
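
The fix is roughly the addition of an 'else', so that an out-of-range
entry is only scheduled for erasure and never written into the table
(sketch):

	if (aeb->lnum >= vol->reserved_pebs)
		/* e.g. left over from an interrupted re-size */
		ubi_move_aeb_to_list(av, aeb, &ai->erase);
	else
		vol->eba_tbl[aeb->lnum] = aeb->pnum;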

Caught by Coverity, CID #711212.

Cc: stable <stable@vger.kernel.org>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-26 12:07:17 +01:00
Brian Norris
b388e6a7a6 UBI: fix missing brace control flow
commit 0e707ae79b ("UBI: do propagate positive error codes up") seems
to have produced an unintended change in the control flow here.

Completely untested, but it looks obvious.

Caught by Coverity, which didn't like the indentation. CID 1271184.

Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-02-23 22:17:24 +01:00
Artem Bityutskiy
0e707ae79b UBI: do propagate positive error codes up
UBI uses positive function return codes internally, and should not propagate
them up, except in the place this patch fixes. Here is the original bug report
from Dan Carpenter:

The problem is really in ubi_eba_read_leb().

drivers/mtd/ubi/eba.c
   412                  err = ubi_io_read_vid_hdr(ubi, pnum, vid_hdr, 1);
   413                  if (err && err != UBI_IO_BITFLIPS) {
   414                          if (err > 0) {
   415                                  /*
   416                                   * The header is either absent or corrupted.
   417                                   * The former case means there is a bug -
   418                                   * switch to read-only mode just in case.
   419                                   * The latter case means a real corruption - we
   420                                   * may try to recover data. FIXME: but this is
   421                                   * not implemented.
   422                                   */
   423                                  if (err == UBI_IO_BAD_HDR_EBADMSG ||
   424                                      err == UBI_IO_BAD_HDR) {
   425                                          ubi_warn("corrupted VID header at PEB %d, LEB %d:%d",
   426                                                   pnum, vol_id, lnum);
   427                                          err = -EBADMSG;
   428                                  } else
   429                                          ubi_ro_mode(ubi);

On this path we return UBI_IO_FF and UBI_IO_FF_BITFLIPS and it
eventually gets passed to ERR_PTR().  We probably dereference the bad
pointer and oops.  At that point we've gone read only so it was already
a bad situation...

   430                          }
   431                          goto out_free;
   432                  } else if (err == UBI_IO_BITFLIPS)
   433                          scrub = 1;
   434

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2015-01-28 16:09:25 +01:00
Richard Weinberger
9ff08979e1 UBI: Add initial support for scatter gather
Adds a new set of functions to deal with scatter gather.
ubi_eba_read_leb_sg() will read from a LEB into a scatter gather list.
The new data structure struct ubi_sgl will be used within UBI to
hold the scatter gather list itself and metadata to have a cursor
within the list.
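
The cursor structure looks approximately like this (sketch):

	struct ubi_sgl {
		int list_pos;				/* current entry in sg[] */
		int page_pos;				/* offset within that entry */
		struct scatterlist sg[UBI_MAX_SG_COUNT];
	};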

Signed-off-by: Richard Weinberger <richard@nod.at>
Tested-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Reviewed-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
2015-01-28 16:04:26 +01:00
Tanya Brokhman
3260870331 UBI: Extend UBI layer debug/messaging capabilities
If there is more than one UBI device mounted, there is no way to
distinguish between messages from different UBI devices.
Add device number to all ubi layer message types.

The R/O block driver messages were replaced by pr_* since
ubi_device structure is not used by it.

Amended a bit by Artem.

Signed-off-by: Tanya Brokhman <tlinder@codeaurora.org>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2014-11-07 12:08:51 +02:00
Richard Weinberger
170505f58f UBI: ubi_eba_read_leb: Remove in vain variable assignment
There is no need to set err, it will be overwritten in any case
later at:
        if (scrub)
                err = ubi_wl_scrub_peb(ubi, pnum);

Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2014-09-26 13:42:41 +03:00
Richard Weinberger
8974b15c6e UBI: Wire-up ->fm_sem
Fastmap uses ->fm_sem to stop EBA changes while writing
a new fastmap.

Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2012-10-03 12:29:37 +03:00
Richard Weinberger
00abf30415 UBI: Add self_check_eba()
self_check_eba() compares two ubi_attach_info objects.
Fastmap uses this function for self checks.

Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2012-10-03 12:29:37 +03:00
Richard Weinberger
a730665370 UBI: Export next_sqnum()
Fastmap needs next_sqnum(), rename it to ubi_next_sqnum()
and make it non-static.

Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2012-10-03 12:29:37 +03:00
Artem Bityutskiy
049333cecb UBI: comply with coding style
Join all the split printk lines in order to stop checkpatch complaining.

Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2012-09-04 09:39:01 +03:00
Joel Reardon
d36e59e69b UBI: add lnum and vol_id to struct ubi_work
This is part of a multipart patch to allow UBI to force the erasure of
particular logical eraseblock numbers. In this patch, the volume id and LEB
number are added to the ubi_work data structure, and both are also passed as
parameters to schedule erase to set them appropriately. Whenever ubi_wl_put_peb
is called, the lnum is also passed to be forwarded to schedule erase. Later,
a new ubi_sync_lnum will be added to execute immediately all work related to
that lnum.
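
After the change the work descriptor carries the origin of the PEB,
roughly (sketch, unrelated members omitted):

	struct ubi_work {
		struct list_head list;
		/* ... work function and other members ... */
		struct ubi_wl_entry *e;
		int vol_id;	/* added: volume the PEB belonged to */
		int lnum;	/* added: LEB the PEB was mapped to */
		int torture;
	};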

This was tested by outputting the vol_id and lnum during the schedule of
erasure. The ubi thread was disabled and two ubifs drives on separate
partitions repeatedly changed a small number of LEBs. The ubi module was re-added,
and all the erased LEBs, corresponding to the volumes, were added to the
schedule erase queue.

Artem: minor tweaks

Signed-off-by: Joel Reardon <reardonj@inf.ethz.ch>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2012-05-21 11:34:41 +03:00
Artem Bityutskiy
41e0cd9d4e UBI: rename _init_scan functions
We have a couple of initialization functions left which have a "_scan" suffix -
rename them:

ubi_eba_init_scan() -> ubi_eba_init()
ubi_wl_init_scan() -> ubi_wl_init()

Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2012-05-20 20:26:04 +03:00
Artem Bityutskiy
0bae2887a7 UBI: rename ubi_scan_move_to_list
The old name is not logical anymore - rename it to 'ubi_move_aeb_to_list()'.

Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2012-05-20 20:26:03 +03:00
Artem Bityutskiy
dcd85fdd10 UBI: rename ubi_scan_find_av
The old name is not logical anymore - rename it to 'ubi_find_av()'.

Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2012-05-20 20:26:03 +03:00
Artem Bityutskiy
517af48c05 UBI: rename sv to av
After re-naming the 'struct ubi_scan_volume' we should adjust all variables
named 'sv' to something else, because 'sv' stands for "scanning volume".
Let's rename it to 'av' which stands for "attaching volume" which is
a bit more consistent and has the same length, which makes re-naming easy.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
2012-05-20 20:26:02 +03:00
Artem Bityutskiy
a4e6042f1d UBI: rename si to ai
After re-naming the 'struct ubi_scan_info' we should adjust all variables
named 'si' to something else, because 'si' stands for "scanning info".
Let's rename it to 'ai' which stands for "attaching info" which is
a bit more consistent and has the same length, which makes re-naming easy.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
2012-05-20 20:26:02 +03:00
Artem Bityutskiy
2c5ec5ce66 UBI: rename seb to aeb
After re-naming the 'struct ubi_scan_leb' we should adjust all variables
named 'seb' to something else, because 'seb' stands for "scanning eraseblock".
Let's rename it to 'aeb' which stands for "attaching eraseblock" which is
a bit more consistent and has the same length.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
2012-05-20 20:26:02 +03:00
Artem Bityutskiy
afc15a814b UBI: rename struct ubi_scan_info
Rename 'struct ubi_scan_info' to 'struct ubi_attach_info'. This is part
of the code re-structuring I am trying to do in order to add fastmap
in a more logical way. Fastmap can share a lot with scanning, including
the attach-time data structures, which all now have "scan" word in the
name. Let's get rid of this word.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
2012-05-20 20:26:01 +03:00
Artem Bityutskiy
cb28a9322d UBI: rename struct ubi_scan_volume
Rename 'struct ubi_scan_volume' to 'struct ubi_ainf_volume'. This is part
of the code re-structuring I am trying to do in order to add fastmap
in a more logical way. Fastmap can share a lot with scanning, including
the attach-time data structures, which all now have "scan" word in the
name. Let's get rid of this word and use "ainf" instead which stands
for "attach information". It has the same length as "scan" so re-naming
is trivial.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
2012-05-20 20:26:01 +03:00
Artem Bityutskiy
227423d241 UBI: rename struct ubi_scan_leb
Rename 'struct ubi_scan_leb' to 'struct ubi_ainf_leb'. This is part
of the code re-structuring I am trying to do in order to add fastmap
in a more logical way. Fastmap can share a lot with scanning, including
the attach-time data structures, which all now have "scan" word in the
name. Let's get rid of this word and use "ainf" instead which stands
for "attach information". It has the same length as "scan" so re-naming
is trivial.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
2012-05-20 20:26:01 +03:00
Richard Weinberger
b36a261e8c UBI: Kill data type hint
We do not need this feature and, to our shame, it was not even working;
a bug was found very recently.
	-- Artem Bityutskiy

Without the data type hint UBI2 (fastmap) will be easier to implement.

Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2012-05-20 20:25:59 +03:00
Artem Bityutskiy
cc831464f8 UBI: rename MOVE_CANCEL_BITFLIPS to MOVE_TARGET_BITFLIPS
While looking at a problem reported by UBI around the PEB moving area I
noticed that 'MOVE_CANCEL_BITFLIPS' is a somewhat inconsistent name and
'MOVE_TARGET_BITFLIPS' is better - let's rename it.

Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2012-03-09 10:31:18 +02:00
Artem Bityutskiy
0ca39d74de UBI: rename peb_buf1 to peb_buf
Now we have only one buffer so let's rename it to just 'peb_buf'.

Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2012-03-09 09:39:31 +02:00
Josselin Costanzi
43b043e78b UBI: reduce memory consumption
Remove the pre-allocated 'peb_buf2' buffer because we do not really need it.
The only reason UBI has it is to check that the data were written correctly.
But we do not have to have 2 buffers for this and waste RAM - we can just
compare CRC checksums instead. This reduces UBI memory consumption.

Artem Bityutskiy: massaged the patch and commit message

Signed-off-by: Josselin Costanzi <josselin.costanzi@mobile-devices.fr>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2012-03-09 09:39:31 +02:00
Bhavesh Parekh
e801e128b2 UBI: fix missing scrub when there is a bit-flip
In some cases, when scrubbing a PEB, if we did not get the lock on
the PEB it fails to scrub. Add that PEB back to the scrub list.

Artem: minor amendments.

Cc: stable@kernel.org [2.6.31+]
Signed-off-by: Bhavesh Parekh <bparekh@nvidia.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
2011-11-30 17:43:42 +05:30
Brian Norris
d57f40544a mtd: utilize `mtd_is_*()' functions
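
This converts open-coded error checks such as (illustrative, not quoted
from the patch):

	if (err == -EUCLEAN)

into the equivalent helper form:

	if (mtd_is_bitflip(err))

and likewise -EBADMSG checks into mtd_is_eccerr(err).
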
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@intel.com>
2011-09-21 09:19:06 +03:00
Artem Bityutskiy
5fc01ab693 UBI: preserve corrupted PEBs
Currently UBI erases all corrupted eraseblocks, irrespective of the nature
of the corruption: corruption due to power cuts and non-power cut corruption.
The former case is OK, but the latter is not, because UBI may destroy
potentially important data.

With this patch, during scanning, when UBI hits a PEB with corrupted VID
header, it checks whether this PEB contains only 0xFF data. If yes, it is
safe to erase this PEB and it is put to the 'erase' list. If not, this may
be important data and it is better to avoid erasing this PEB. Instead,
UBI puts it on the corr list and moves it out of the pool of available PEBs.
IOW, UBI preserves this PEB.

Such corrupted PEBs lessen the number of available PEBs. So the more of them
we accumulate, the fewer PEBs are available. The maximum number of non-power
cut corrupted PEBs is 8.

This patch is a response to a UBIFS problem where the reporter
(Matthew L. Creech <mlcreech@gmail.com>) observes that the UBIFS index points
to an unmapped LEB. The theory is that the corresponding PEB somehow got
corrupted and UBI wiped it. This patch (actually a series of patches)
tries to make sure such PEBs are preserved - this would make it easier
to analyze the corruption.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
2010-10-19 17:19:57 +03:00
Artem Bityutskiy
756e1df1d2 UBI: rename IO error code
Rename UBI_IO_BAD_HDR_READ to UBI_IO_BAD_HDR_EBADMSG, which is presumably more
self-documenting and readable. Indeed, the '_READ' suffix does not say much and
is even confusing, while '_EBADMSG' indicates an uncorrectable ECC error, because we
use -EBADMSG all over the place to represent ECC errors.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
2010-10-19 17:19:56 +03:00
Artem Bityutskiy
64d4b4c90a UBI: do not warn unnecessarily
Currently, when UBI attaches an MTD device and cannot reserve all 1% (by
default) of PEBs for bad eraseblock handling, it prints a warning. However,
Matthew L. Creech <mlcreech@gmail.com> is not very happy to see this warning,
because he did reserve enough PEBs at the beginning, but with time some
PEBs became bad. The warning is not necessary in this case.

This patch makes UBI print the warning
 o if this is a new image
 o if this is a used image and the amount of reserved PEBs is only 10% (or less)
   of the size of the reserved PEB pool.

Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
2010-08-02 07:21:19 +03:00