raid1: Rewrite the implementation of iobarrier.
There is an iobarrier in raid1 because of contention between normal IO and
resync IO.  It suspends all normal IO when resync/recovery happens.

However, if the normal IO is outside the resync window, there is no
contention.  So this patch changes the barrier mechanism to only block IO
that could contend with the resync that is currently happening.

We partition the whole space into five parts.
|---------|-----------|------------|----------------|-------|
        start   next_resync   start_next_window    end_window

start + RESYNC_WINDOW = next_resync
next_resync + NEXT_NORMALIO_DISTANCE = start_next_window
start_next_window + NEXT_NORMALIO_DISTANCE = end_window

Firstly we introduce some concepts:

1 - RESYNC_WINDOW: For resync, there are 32 resync requests at most at the
    same time.  A sync request is RESYNC_BLOCK_SIZE (64*1024).  So the
    RESYNC_WINDOW is 32 * RESYNC_BLOCK_SIZE, that is 2MB.
2 - NEXT_NORMALIO_DISTANCE: the distance between next_resync and
    start_next_window.  It also indicates the distance between
    start_next_window and end_window.  It is currently
    3 * RESYNC_WINDOW_SIZE but could be tuned if this turned out not to be
    optimal.
3 - next_resync: the next sector at which we will do sync IO.
4 - start: a position which is at most RESYNC_WINDOW before next_resync.
5 - start_next_window: a position which is NEXT_NORMALIO_DISTANCE beyond
    next_resync.  Normal-io after this position doesn't need to wait for
    resync-io to complete.
6 - end_window: a position which is 2 * NEXT_NORMALIO_DISTANCE beyond
    next_resync.  This also doesn't need to wait, but is counted
    differently.
7 - current_window_requests: the count of normalIO between
    start_next_window and end_window.
8 - next_window_requests: the count of normalIO after end_window.

NormalIO will be partitioned into four types:

NormIO1: the end sector of the bio is smaller than or equal to start.
NormIO2: the start sector of the bio is larger than or equal to end_window.
NormIO3: the start sector of the bio is larger than or equal to
         start_next_window.
NormIO4: the location is between start_next_window and end_window.

|--------|-----------|--------------------|----------------|-------------|
       start     next_resync       start_next_window    end_window
 NormIO1      NormIO4          NormIO4           NormIO3       NormIO2

For NormIO1, we don't need any io barrier.
For NormIO4, we used a similar approach to the original iobarrier
mechanism: the normalIO and resyncIO must be kept separate.
For NormIO2/3, we add two fields to struct r1conf: "current_window_requests"
and "next_window_requests".  They indicate the count of active requests in
the two windows.  For these, we don't wait for resync io to complete.

For the resync action, if there are NormIO4s, we must wait for them to
complete.  If not, we can proceed.  But if the resync action reaches
start_next_window and current_window_requests > 0 (that is, there are
NormIO3s), we must wait until current_window_requests becomes zero.  When
current_window_requests becomes zero, start_next_window also moves forward.
Then current_window_requests will be replaced by next_window_requests.

There is a problem of when and how to change a NormIO2 into a NormIO3.
Only then can the sync action progress.

We add a field "start_next_window" to struct r1conf:

A: if start_next_window == MaxSector, it means there are no NormIO2/3.
   So start_next_window = next_resync + NEXT_NORMALIO_DISTANCE.
B: if current_window_requests == 0 && next_window_requests != 0, it means
   start_next_window moves to end_window.

There is another problem: how to differentiate between an old NormIO2 (now
a NormIO3) and a new NormIO2.  For example, there are many bios which are
NormIO2 and one bio which is NormIO3.  The NormIO3 completed first, so the
NormIO2 bios became NormIO3.

We add a field "start_next_window" to struct r1bio.  It records the value
of conf->start_next_window when the call to wait_barrier() is made in
make_request().

In allow_barrier(), we check it against conf->start_next_window.
If r1_bio->start_next_window == conf->start_next_window, there was no
transition between NormIO2 and NormIO3.
If r1_bio->start_next_window != conf->start_next_window, there was a
transition between NormIO2 and NormIO3.  There can only have been one
transition, so the bio must be an old NormIO2.

For one bio, there may be many r1bio's.  So we make sure all the
r1bio->start_next_window values are the same.  If we meet a blocked_dev in
make_request(), it must call allow_barrier() and then wait_barrier(), so
the value of conf->start_next_window may change between the two calls.  If
there are many r1bio's with different start_next_window values, the
accounting for the relevant bio would depend on the value recorded in the
last r1bio, which would cause errors.  To avoid this, we must wait for the
previous r1bios to complete.

Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
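The window arithmetic above can be sketched in a minimal user-space model.
This is not the kernel code: classify(), struct windows, and
window_request_done() are illustrative names, and the sector math assumes
512-byte sectors as described in the message.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t sector_t;

/* Constants from the description: 32 in-flight resync requests of
 * RESYNC_BLOCK_SIZE (64KiB) give a 2MB window. */
#define RESYNC_BLOCK_SIZE	(64 * 1024)
#define RESYNC_WINDOW		(32 * RESYNC_BLOCK_SIZE)
#define RESYNC_WINDOW_SECTORS	(RESYNC_WINDOW >> 9)	/* 4096 sectors */
#define NEXT_NORMALIO_DISTANCE	(3 * RESYNC_WINDOW_SECTORS)

enum normio { NORMIO1, NORMIO2, NORMIO3, NORMIO4 };

/* Classify a write bio by its position relative to the windows. */
static enum normio classify(sector_t next_resync,
			    sector_t bi_sector, sector_t bi_end_sector)
{
	sector_t start = next_resync - RESYNC_WINDOW_SECTORS;
	sector_t start_next_window = next_resync + NEXT_NORMALIO_DISTANCE;
	sector_t end_window = start_next_window + NEXT_NORMALIO_DISTANCE;

	if (bi_end_sector <= start)
		return NORMIO1;	/* entirely before resync: no barrier */
	if (bi_sector >= end_window)
		return NORMIO2;	/* counted in next_window_requests */
	if (bi_sector >= start_next_window)
		return NORMIO3;	/* counted in current_window_requests */
	return NORMIO4;		/* contends with resync: must wait */
}

/* Model of the transition performed on request completion: when the
 * last request in the current window finishes, the window slides
 * forward and next_window_requests is promoted. */
struct windows {
	sector_t start_next_window;	/* (sector_t)-1 models MaxSector */
	int current_window_requests;
	int next_window_requests;
};

static void window_request_done(struct windows *w)
{
	if (--w->current_window_requests == 0) {
		if (w->next_window_requests) {
			w->current_window_requests = w->next_window_requests;
			w->next_window_requests = 0;
			w->start_next_window += NEXT_NORMALIO_DISTANCE;
		} else
			w->start_next_window = (sector_t)-1;
	}
}
```

With next_resync at sector 100000, a bio ending at or before sector 95904
is a NormIO1, one starting at or after 112288 is a NormIO3, and one
starting at or after 124576 is a NormIO2; everything in between is a
NormIO4 and must wait on the barrier.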
parent: 8e005f7c02
commit: 79ef3a8aa1
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -66,7 +66,8 @@
  */
 static int max_queued_requests = 1024;
 
-static void allow_barrier(struct r1conf *conf);
+static void allow_barrier(struct r1conf *conf, sector_t start_next_window,
+			  sector_t bi_sector);
 static void lower_barrier(struct r1conf *conf);
 
 static void * r1bio_pool_alloc(gfp_t gfp_flags, void *data)
@@ -227,6 +228,8 @@ static void call_bio_endio(struct r1bio *r1_bio)
 	struct bio *bio = r1_bio->master_bio;
 	int done;
 	struct r1conf *conf = r1_bio->mddev->private;
+	sector_t start_next_window = r1_bio->start_next_window;
+	sector_t bi_sector = bio->bi_sector;
 
 	if (bio->bi_phys_segments) {
 		unsigned long flags;
@@ -234,6 +237,11 @@ static void call_bio_endio(struct r1bio *r1_bio)
 		bio->bi_phys_segments--;
 		done = (bio->bi_phys_segments == 0);
 		spin_unlock_irqrestore(&conf->device_lock, flags);
+		/*
+		 * make_request() might be waiting for
+		 * bi_phys_segments to decrease
+		 */
+		wake_up(&conf->wait_barrier);
 	} else
 		done = 1;
 
@@ -245,7 +253,7 @@ static void call_bio_endio(struct r1bio *r1_bio)
 		 * Wake up any possible resync thread that waits for the device
 		 * to go idle.
 		 */
-		allow_barrier(conf);
+		allow_barrier(conf, start_next_window, bi_sector);
 	}
 }
 
@@ -827,10 +835,19 @@ static void raise_barrier(struct r1conf *conf)
 	/* block any new IO from starting */
 	conf->barrier++;
 
-	/* Now wait for all pending IO to complete */
+	/* For these conditions we must wait:
+	 * A: while the array is in frozen state
+	 * B: while barrier >= RESYNC_DEPTH, meaning resync reach
+	 *    the max count which allowed.
+	 * C: next_resync + RESYNC_SECTORS > start_next_window, meaning
+	 *    next resync will reach to the window which normal bios are
+	 *    handling.
+	 */
 	wait_event_lock_irq(conf->wait_barrier,
 			    !conf->array_frozen &&
-			    !conf->nr_pending && conf->barrier < RESYNC_DEPTH,
+			    conf->barrier < RESYNC_DEPTH &&
+			    (conf->start_next_window >=
+			     conf->next_resync + RESYNC_SECTORS),
 			    conf->resync_lock);
 
 	spin_unlock_irq(&conf->resync_lock);
@@ -846,10 +863,33 @@ static void lower_barrier(struct r1conf *conf)
 	wake_up(&conf->wait_barrier);
 }
 
-static void wait_barrier(struct r1conf *conf)
+static bool need_to_wait_for_sync(struct r1conf *conf, struct bio *bio)
 {
+	bool wait = false;
+
+	if (conf->array_frozen || !bio)
+		wait = true;
+	else if (conf->barrier && bio_data_dir(bio) == WRITE) {
+		if (conf->next_resync < RESYNC_WINDOW_SECTORS)
+			wait = true;
+		else if ((conf->next_resync - RESYNC_WINDOW_SECTORS
+				>= bio_end_sector(bio)) ||
+			 (conf->next_resync + NEXT_NORMALIO_DISTANCE
+				<= bio->bi_sector))
+			wait = false;
+		else
+			wait = true;
+	}
+
+	return wait;
+}
+
+static sector_t wait_barrier(struct r1conf *conf, struct bio *bio)
+{
+	sector_t sector = 0;
+
 	spin_lock_irq(&conf->resync_lock);
-	if (conf->barrier) {
+	if (need_to_wait_for_sync(conf, bio)) {
 		conf->nr_waiting++;
 		/* Wait for the barrier to drop.
 		 * However if there are already pending
@@ -863,21 +903,65 @@ static void wait_barrier(struct r1conf *conf)
 		wait_event_lock_irq(conf->wait_barrier,
 				    !conf->array_frozen &&
 				    (!conf->barrier ||
-				     (conf->nr_pending &&
+				     ((conf->start_next_window <
+				       conf->next_resync + RESYNC_SECTORS) &&
 				      current->bio_list &&
 				      !bio_list_empty(current->bio_list))),
 				    conf->resync_lock);
 		conf->nr_waiting--;
 	}
+
+	if (bio && bio_data_dir(bio) == WRITE) {
+		if (conf->next_resync + NEXT_NORMALIO_DISTANCE
+		    <= bio->bi_sector) {
+			if (conf->start_next_window == MaxSector)
+				conf->start_next_window =
+					conf->next_resync +
+					NEXT_NORMALIO_DISTANCE;
+
+			if ((conf->start_next_window + NEXT_NORMALIO_DISTANCE)
+			    <= bio->bi_sector)
+				conf->next_window_requests++;
+			else
+				conf->current_window_requests++;
+		}
+		if (bio->bi_sector >= conf->start_next_window)
+			sector = conf->start_next_window;
+	}
+
 	conf->nr_pending++;
 	spin_unlock_irq(&conf->resync_lock);
+	return sector;
 }
 
-static void allow_barrier(struct r1conf *conf)
+static void allow_barrier(struct r1conf *conf, sector_t start_next_window,
+			  sector_t bi_sector)
 {
 	unsigned long flags;
 
 	spin_lock_irqsave(&conf->resync_lock, flags);
 	conf->nr_pending--;
+	if (start_next_window) {
+		if (start_next_window == conf->start_next_window) {
+			if (conf->start_next_window + NEXT_NORMALIO_DISTANCE
+			    <= bi_sector)
+				conf->next_window_requests--;
+			else
+				conf->current_window_requests--;
+		} else
+			conf->current_window_requests--;
+
+		if (!conf->current_window_requests) {
+			if (conf->next_window_requests) {
+				conf->current_window_requests =
+					conf->next_window_requests;
+				conf->next_window_requests = 0;
+				conf->start_next_window +=
+					NEXT_NORMALIO_DISTANCE;
+			} else
+				conf->start_next_window = MaxSector;
+		}
+	}
 	spin_unlock_irqrestore(&conf->resync_lock, flags);
 	wake_up(&conf->wait_barrier);
 }
@@ -1012,6 +1096,7 @@ static void make_request(struct mddev *mddev, struct bio * bio)
 	int first_clone;
 	int sectors_handled;
 	int max_sectors;
+	sector_t start_next_window;
 
 	/*
 	 * Register the new request and wait if the reconstruction
@@ -1041,7 +1126,7 @@ static void make_request(struct mddev *mddev, struct bio * bio)
 		finish_wait(&conf->wait_barrier, &w);
 	}
 
-	wait_barrier(conf);
+	start_next_window = wait_barrier(conf, bio);
 
 	bitmap = mddev->bitmap;
 
@@ -1162,6 +1247,7 @@ read_again:
 
 	disks = conf->raid_disks * 2;
  retry_write:
+	r1_bio->start_next_window = start_next_window;
 	blocked_rdev = NULL;
 	rcu_read_lock();
 	max_sectors = r1_bio->sectors;
@@ -1230,14 +1316,24 @@ read_again:
 	if (unlikely(blocked_rdev)) {
 		/* Wait for this device to become unblocked */
 		int j;
+		sector_t old = start_next_window;
 
 		for (j = 0; j < i; j++)
 			if (r1_bio->bios[j])
 				rdev_dec_pending(conf->mirrors[j].rdev, mddev);
 		r1_bio->state = 0;
-		allow_barrier(conf);
+		allow_barrier(conf, start_next_window, bio->bi_sector);
 		md_wait_for_blocked_rdev(blocked_rdev, mddev);
-		wait_barrier(conf);
+		start_next_window = wait_barrier(conf, bio);
+		/*
+		 * We must make sure the multi r1bios of bio have
+		 * the same value of bi_phys_segments
+		 */
+		if (bio->bi_phys_segments && old &&
+		    old != start_next_window)
+			/* Wait for the former r1bio(s) to complete */
+			wait_event(conf->wait_barrier,
+				   bio->bi_phys_segments == 1);
 		goto retry_write;
 	}
 
@@ -1437,11 +1533,14 @@ static void print_conf(struct r1conf *conf)
 
 static void close_sync(struct r1conf *conf)
 {
-	wait_barrier(conf);
-	allow_barrier(conf);
+	wait_barrier(conf, NULL);
+	allow_barrier(conf, 0, 0);
 
 	mempool_destroy(conf->r1buf_pool);
 	conf->r1buf_pool = NULL;
+
+	conf->next_resync = 0;
+	conf->start_next_window = MaxSector;
 }
 
 static int raid1_spare_active(struct mddev *mddev)
@@ -2713,6 +2812,9 @@ static struct r1conf *setup_conf(struct mddev *mddev)
 	conf->pending_count = 0;
 	conf->recovery_disabled = mddev->recovery_disabled - 1;
 
+	conf->start_next_window = MaxSector;
+	conf->current_window_requests = conf->next_window_requests = 0;
+
 	err = -EIO;
 	for (i = 0; i < conf->raid_disks * 2; i++) {
 
--- a/drivers/md/raid1.h
+++ b/drivers/md/raid1.h
@@ -41,6 +41,19 @@ struct r1conf {
 	 */
 	sector_t		next_resync;
 
+	/* When raid1 starts resync, we divide array into four partitions
+	 * |---------|--------------|---------------------|-------------|
+	 *        next_resync   start_next_window       end_window
+	 * start_next_window = next_resync + NEXT_NORMALIO_DISTANCE
+	 * end_window = start_next_window + NEXT_NORMALIO_DISTANCE
+	 * current_window_requests means the count of normalIO between
+	 *   start_next_window and end_window.
+	 * next_window_requests means the count of normalIO after end_window.
+	 * */
+	sector_t		start_next_window;
+	int			current_window_requests;
+	int			next_window_requests;
+
 	spinlock_t		device_lock;
 
 	/* list of 'struct r1bio' that need to be processed by raid1d,
@@ -112,6 +125,7 @@ struct r1bio {
 	 * in this BehindIO request
 	 */
 	sector_t		sector;
+	sector_t		start_next_window;
 	int			sectors;
 	unsigned long		state;
 	struct mddev		*mddev;