
Merge tag 'dm-4.3-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm

Pull device mapper fixes from Mike Snitzer:
 "Two stable@ fixes:

   - DM thinp fix to properly advertise discard support as disabled for
     thin devices backed by a thin-pool with discard support disabled.

   - DM crypt fix to prevent the creation of bios that violate the
     underlying block device's max_segments limits.  This fixes a
     relatively long-standing NCQ SSD corruption issue reported against
     dm-crypt ever since the dm-crypt cpu parallelization patches were
     merged back in 4.0"

* tag 'dm-4.3-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
  dm crypt: constrain crypt device's max_segment_size to PAGE_SIZE
  dm thin: disable discard support for thin devices if pool's is disabled
commit 5146c8e4df
Author: Linus Torvalds
Date:   2015-09-24 11:04:22 -07:00

2 files changed, 19 insertions(+), 2 deletions(-)

@@ -968,7 +968,8 @@ static void crypt_free_buffer_pages(struct crypt_config *cc, struct bio *clone);
 /*
  * Generate a new unfragmented bio with the given size
- * This should never violate the device limitations
+ * This should never violate the device limitations (but only because
+ * max_segment_size is being constrained to PAGE_SIZE).
  *
  * This function may be called concurrently. If we allocate from the mempool
  * concurrently, there is a possibility of deadlock. For example, if we have
@@ -2045,9 +2046,20 @@ static int crypt_iterate_devices(struct dm_target *ti,
 	return fn(ti, cc->dev, cc->start, ti->len, data);
 }
 
+static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
+{
+	/*
+	 * Unfortunate constraint that is required to avoid the potential
+	 * for exceeding underlying device's max_segments limits -- due to
+	 * crypt_alloc_buffer() possibly allocating pages for the encryption
+	 * bio that are not as physically contiguous as the original bio.
+	 */
+	limits->max_segment_size = PAGE_SIZE;
+}
+
 static struct target_type crypt_target = {
 	.name = "crypt",
-	.version = {1, 14, 0},
+	.version = {1, 14, 1},
 	.module = THIS_MODULE,
 	.ctr = crypt_ctr,
 	.dtr = crypt_dtr,
@@ -2058,6 +2070,7 @@ static struct target_type crypt_target = {
 	.resume = crypt_resume,
 	.message = crypt_message,
 	.iterate_devices = crypt_iterate_devices,
+	.io_hints = crypt_io_hints,
 };
 
 static int __init dm_crypt_init(void)

@@ -4249,6 +4249,10 @@ static void thin_io_hints(struct dm_target *ti, struct queue_limits *limits)
 {
 	struct thin_c *tc = ti->private;
 	struct pool *pool = tc->pool;
+	struct queue_limits *pool_limits = dm_get_queue_limits(pool->pool_md);
 
+	if (!pool_limits->discard_granularity)
+		return; /* pool's discard support is disabled */
+
 	limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
 	limits->max_discard_sectors = 2048 * 1024 * 16;  /* 16G */