From: GitHub
Subject: [Qemu-commits] [qemu/qemu] bb6756: test-bdrv-drain: bdrv_drain() works with cross-Aio...
Date: Tue, 19 Jun 2018 08:57:19 -0700

  Branch: refs/heads/master
  Home:   https://github.com/qemu/qemu
  Commit: bb6756895459f181e2f25e877d3d7a10c297b5c8
      
https://github.com/qemu/qemu/commit/bb6756895459f181e2f25e877d3d7a10c297b5c8
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/io.c
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: bdrv_drain() works with cross-AioContext events

As long as nobody keeps the other I/O thread from working, there is no
reason why bdrv_drain() wouldn't work with cross-AioContext events. The
key is that the root request we're waiting for is in the AioContext
we're polling (which it always is for bdrv_drain()) so that aio_poll()
is woken up in the end.

Add a test case that shows that it works. Remove the comment in
bdrv_drain() that claims otherwise.

Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 79ab8b21dc19c08adc407504e456ff64b9dacb66
      
https://github.com/qemu/qemu/commit/79ab8b21dc19c08adc407504e456ff64b9dacb66
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/io.c
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  block: Use bdrv_do_drain_begin/end in bdrv_drain_all()

bdrv_do_drain_begin/end() already implement everything that
bdrv_drain_all_begin/end() need and currently still do manually: Disable
external events, call parent drain callbacks, call block driver
callbacks.

They also do two more things:

The first is incrementing bs->quiesce_counter. bdrv_drain_all() already
stood out in the test case by behaving differently from the other drain
variants. Adding this is not only safe, but in fact a bug fix.

The second is calling bdrv_drain_recurse(). We already do that later in
the same function in a loop, so basically doing an early first iteration
doesn't hurt.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Stefan Hajnoczi <address@hidden>
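
A rough, self-contained sketch of the quiesce counter idea (illustrative C
only, not QEMU code; all names are made up): begin increments the counter and
disables external events on the first increment, end does the reverse, so
nested drain sections compose naturally.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int quiesce_counter;
        bool external_events_enabled;
    } Node;

    static void drain_begin(Node *bs)
    {
        if (bs->quiesce_counter++ == 0) {
            /* first drain section: stop external events; parent and
             * block driver callbacks would also run here */
            bs->external_events_enabled = false;
        }
    }

    static void drain_end(Node *bs)
    {
        assert(bs->quiesce_counter > 0);
        if (--bs->quiesce_counter == 0) {
            bs->external_events_enabled = true;  /* last section ended */
        }
    }

    int main(void)
    {
        Node bs = { .external_events_enabled = true };

        drain_begin(&bs);   /* e.g. from a drain_all section */
        drain_begin(&bs);   /* nested single-node drain */
        drain_end(&bs);
        drain_end(&bs);
        printf("counter=%d external=%d\n",
               bs.quiesce_counter, bs.external_events_enabled);
        return 0;
    }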


  Commit: 7d40d9ef9dfb4948a857bfc6ec8408eed1d1d9e7
      
https://github.com/qemu/qemu/commit/7d40d9ef9dfb4948a857bfc6ec8408eed1d1d9e7
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/io.c

  Log Message:
  -----------
  block: Remove 'recursive' parameter from bdrv_drain_invoke()

All callers pass false for the 'recursive' parameter now. Remove it.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Stefan Hajnoczi <address@hidden>


  Commit: c13ad59f012cbbccb866a10477458e69bc868dbb
      
https://github.com/qemu/qemu/commit/c13ad59f012cbbccb866a10477458e69bc868dbb
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/io.c

  Log Message:
  -----------
  block: Don't manually poll in bdrv_drain_all()

All involved nodes are already idle because we called
bdrv_do_drain_begin() on them.

The comment in the code suggested that this was not correct because the
completion of a request on one node could spawn a new request on a
different node (which might have been drained before, so we wouldn't
drain the new request). In reality, new requests to different nodes
aren't spawned out of nothing, but only in the context of a parent
request, and they aren't submitted to random nodes, but only to child
nodes. As long as we still poll for the completion of the parent request
(which we do), draining each root node separately is good enough.

Remove the additional polling code from bdrv_drain_all_begin() and
replace it with an assertion that all nodes are already idle after we
drained them separately.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Stefan Hajnoczi <address@hidden>


  Commit: 6d0252f2f9cb49925deb1c41101462c9481dfc90
      
https://github.com/qemu/qemu/commit/6d0252f2f9cb49925deb1c41101462c9481dfc90
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  tests/test-bdrv-drain: bdrv_drain_all() works in coroutines now

Since we use bdrv_do_drained_begin/end() for bdrv_drain_all_begin/end(),
coroutine context is automatically left with a BH, preventing the
deadlocks that made bdrv_drain_all*() unsafe in coroutine context. Now
that we have even removed the old polling code as dead code, it is
obvious that bdrv_drain_all*() is now compatible with coroutine context.

Enable the coroutine test cases for bdrv_drain_all().

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Stefan Hajnoczi <address@hidden>


  Commit: 1cc8e54ada97f7ac479554e15ca9e426c895b158
      
https://github.com/qemu/qemu/commit/1cc8e54ada97f7ac479554e15ca9e426c895b158
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/io.c
    M include/block/aio-wait.h

  Log Message:
  -----------
  block: Avoid unnecessary aio_poll() in AIO_WAIT_WHILE()

Commit 91af091f923 added an additional aio_poll() to BDRV_POLL_WHILE()
in order to make sure that all pending BHs are executed on drain. This
was the wrong place to make the fix, as it is useless overhead for all
other users of the macro and unnecessarily complicates the mechanism.

This patch effectively reverts said commit (the context has changed a
bit and the code has moved to AIO_WAIT_WHILE()) and instead polls in the
loop condition for drain.

The effect is probably hard to measure in any real-world use case
because actual I/O will dominate, but if I run only the initialisation
part of 'qemu-img convert' where it calls bdrv_block_status() for the
whole image to find out how much data there is to copy, this phase actually
needs only roughly half the time after this patch.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Stefan Hajnoczi <address@hidden>
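
A toy, self-contained illustration of the pattern (made-up names, not the
real AIO_WAIT_WHILE/BDRV_POLL_WHILE): the generic wait loop stays minimal,
and the drain-specific condition does any extra polling itself before
reporting whether work is still in flight.

    #include <stdbool.h>
    #include <stdio.h>

    static int pending_requests = 3;

    /* Stand-in for aio_poll(ctx, true): completes one request per call. */
    static bool fake_aio_poll(void)
    {
        if (pending_requests > 0) {
            pending_requests--;
            return true;            /* made progress */
        }
        return false;               /* nothing to do */
    }

    /* Generic wait loop: it only evaluates the condition and polls while
     * the condition holds; no drain-specific extra poll hides in here. */
    #define WAIT_WHILE(cond)        \
        do {                        \
            while ((cond)) {        \
                fake_aio_poll();    \
            }                       \
        } while (0)

    /* Drain-specific condition: it may poll itself (e.g. to flush BHs)
     * before reporting whether anything is still in flight. */
    static bool drain_poll(void)
    {
        while (fake_aio_poll()) {
            /* process everything that is currently runnable */
        }
        return pending_requests > 0;
    }

    int main(void)
    {
        WAIT_WHILE(drain_poll());
        printf("drained, pending=%d\n", pending_requests);
        return 0;
    }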


  Commit: 89bd030533e3592ca0a995450dcfc5d53e459e20
      
https://github.com/qemu/qemu/commit/89bd030533e3592ca0a995450dcfc5d53e459e20
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block.c
    M block/io.c
    M block/mirror.c
    M blockjob.c
    M include/block/block.h
    M include/block/block_int.h
    M include/block/blockjob_int.h
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  block: Really pause block jobs on drain

We already requested that block jobs be paused in .bdrv_drained_begin,
but no guarantee was made that the job was actually inactive at the
point where bdrv_drained_begin() returned.

This introduces a new callback BdrvChildRole.bdrv_drained_poll() and
uses it to make bdrv_drain_poll() consider block jobs using the node to
be drained.

For the test case to work as expected, we have to switch from
block_job_sleep_ns() to qemu_co_sleep_ns() so that the test job is even
considered active and must be waited for when draining the node.

Signed-off-by: Kevin Wolf <address@hidden>


  Commit: d30b8e64b7b282da785307504ada59efa8096fb1
      
https://github.com/qemu/qemu/commit/d30b8e64b7b282da785307504ada59efa8096fb1
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/io.c

  Log Message:
  -----------
  block: Remove bdrv_drain_recurse()

For bdrv_drain(), recursively waiting for child node requests is
pointless because we didn't quiesce their parents, so new requests could
come in anyway. Letting the function work only on a single node makes it
more consistent.

For subtree drains and drain_all, we already have the recursion in
bdrv_do_drained_begin(), so the extra recursion doesn't add anything
either.

Remove the useless code.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Stefan Hajnoczi <address@hidden>


  Commit: 4c8158e359d194394c64acd21caf5e3f3f3141c2
      
https://github.com/qemu/qemu/commit/4c8158e359d194394c64acd21caf5e3f3f3141c2
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: Add test for node deletion

This patch adds two bdrv-drain tests for what happens if some BDS goes
away during the drainage.

The basic idea is that you have a parent BDS with some child nodes.
Then, you drain one of the children.  Because of that, the party who
actually owns the parent decides to (A) delete it, or (B) detach all its
children from it -- both while the child is still being drained.

A real-world case where this can happen is the mirror block job, which
may exit if you drain one of its children.

Signed-off-by: Max Reitz <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: fe4f0614ef9e361dae12012d3c400657444836cf
      
https://github.com/qemu/qemu/commit/fe4f0614ef9e361dae12012d3c400657444836cf
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block.c
    M block/io.c
    M include/block/block.h

  Log Message:
  -----------
  block: Drain recursively with a single BDRV_POLL_WHILE()

Anything can happen inside BDRV_POLL_WHILE(), including graph
changes that may interfere with its callers (e.g. child list iteration
in recursive callers of bdrv_do_drained_begin).

Switch to a single BDRV_POLL_WHILE() call for the whole subtree at the
end of bdrv_do_drained_begin() to avoid such effects. The recursion
happens now inside the loop condition. As the graph can only change
between bdrv_drain_poll() calls, but not inside of it, doing the
recursion here is safe.

Signed-off-by: Kevin Wolf <address@hidden>
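
A minimal illustration of that structure (self-contained C, not QEMU code):
one polling loop at the top level, with a recursive busy check over the
children as the loop condition, re-evaluated between polls where the graph
is allowed to change.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef struct Node {
        int in_flight;
        struct Node *children[4];
        size_t nchildren;
    } Node;

    /* Recursive loop condition: is anything in this subtree still busy? */
    static bool subtree_busy(Node *n)
    {
        for (size_t i = 0; i < n->nchildren; i++) {
            if (subtree_busy(n->children[i])) {
                return true;
            }
        }
        return n->in_flight > 0;
    }

    /* Stand-in for aio_poll(): lets one request in the subtree finish. */
    static bool poll_once(Node *n)
    {
        if (n->in_flight > 0) {
            n->in_flight--;
            return true;
        }
        for (size_t i = 0; i < n->nchildren; i++) {
            if (poll_once(n->children[i])) {
                return true;
            }
        }
        return false;
    }

    int main(void)
    {
        Node child = { .in_flight = 2 };
        Node root  = { .in_flight = 1, .children = { &child }, .nchildren = 1 };

        /* Single top-level loop; the recursion lives in the condition. */
        while (subtree_busy(&root)) {
            poll_once(&root);
        }
        printf("subtree quiescent\n");
        return 0;
    }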


  Commit: ebd31837618cdc7bda83090773dcdd87475d55b7
      
https://github.com/qemu/qemu/commit/ebd31837618cdc7bda83090773dcdd87475d55b7
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: Test node deletion in subtree recursion

If bdrv_do_drained_begin() polls during its subtree recursion, the graph
can change and mess up the bs->children iteration. Test that this
doesn't happen.

Signed-off-by: Kevin Wolf <address@hidden>


  Commit: dcf94a23b1add0f856db51e9ff5ba0774e096076
      
https://github.com/qemu/qemu/commit/dcf94a23b1add0f856db51e9ff5ba0774e096076
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block.c
    M block/io.c
    M include/block/block.h

  Log Message:
  -----------
  block: Don't poll in parent drain callbacks

bdrv_do_drained_begin() is only safe if we have a single
BDRV_POLL_WHILE() after quiescing all affected nodes. We cannot allow
parent callbacks to introduce a nested polling loop that could cause
graph changes while we're traversing the graph.

Split off bdrv_do_drained_begin_quiesce(), which only quiesces a single
node without waiting for its requests to complete. These requests will
be waited for in the BDRV_POLL_WHILE() call down the call chain.

Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 231281ab42dad2b407b941e36ad11cbc6586e937
      
https://github.com/qemu/qemu/commit/231281ab42dad2b407b941e36ad11cbc6586e937
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: Graph change through parent callback

Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 0109e7e6f83ae5166b81bbd9a4319d60be49985a
      
https://github.com/qemu/qemu/commit/0109e7e6f83ae5166b81bbd9a4319d60be49985a
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/io.c

  Log Message:
  -----------
  block: Defer .bdrv_drain_begin callback to polling phase

We cannot allow aio_poll() in bdrv_drain_invoke(begin=true) until we're
done with propagating the drain through the graph and are doing the
single final BDRV_POLL_WHILE().

Just schedule the coroutine with the callback and increase bs->in_flight
to make sure that the polling phase will wait for it.

Signed-off-by: Kevin Wolf <address@hidden>
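
A toy model of the deferral (self-contained C with made-up names, not the
actual implementation): instead of running the callback right away, bump an
in-flight counter and queue the work, so the later polling phase both runs
it and waits for the counter to drop back to zero.

    #include <stdio.h>

    #define MAX_DEFERRED 8

    typedef struct {
        int in_flight;
        void (*deferred[MAX_DEFERRED])(void *);
        void *opaque[MAX_DEFERRED];
        int ndeferred;
    } Node;

    /* Stand-in for the scheduled .bdrv_drain_begin work. */
    static void drain_begin_cb(void *opaque)
    {
        Node *n = opaque;
        /* driver-specific quiescing would happen here */
        n->in_flight--;              /* lets the polling phase finish */
    }

    static void defer(Node *n, void (*fn)(void *), void *opaque)
    {
        n->in_flight++;              /* the polling phase must wait for this */
        n->deferred[n->ndeferred] = fn;
        n->opaque[n->ndeferred] = opaque;
        n->ndeferred++;
    }

    /* Stands in for the single final polling loop. */
    static void poll_phase(Node *n)
    {
        while (n->in_flight > 0) {
            for (int i = 0; i < n->ndeferred; i++) {
                n->deferred[i](n->opaque[i]);   /* run scheduled work */
            }
            n->ndeferred = 0;
        }
    }

    int main(void)
    {
        Node n = { 0 };

        defer(&n, drain_begin_cb, &n);   /* callback deferred, not run */
        poll_phase(&n);                  /* it runs here, once, safely */
        printf("in_flight=%d\n", n.in_flight);
        return 0;
    }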


  Commit: 57320ca961c2e8488e1884b4ebbcb929b6901dc6
      
https://github.com/qemu/qemu/commit/57320ca961c2e8488e1884b4ebbcb929b6901dc6
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: Test that bdrv_drain_invoke() doesn't poll

This adds a test case that goes wrong if bdrv_drain_invoke() calls
aio_poll().

Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 4d22bbf4ef72583eefdf44db6bf9fc7683fbc4c2
      
https://github.com/qemu/qemu/commit/4d22bbf4ef72583eefdf44db6bf9fc7683fbc4c2
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M include/block/aio-wait.h

  Log Message:
  -----------
  block: Allow AIO_WAIT_WHILE with NULL ctx

bdrv_drain_all() wants to have a single polling loop for draining the
in-flight requests of all nodes. This means that the AIO_WAIT_WHILE()
condition relies on activity in multiple AioContexts, which is polled
from the mainloop context. We must therefore call AIO_WAIT_WHILE() from
the mainloop thread and use the AioWait notification mechanism.

Just randomly picking the AioContext of any non-mainloop thread would
work, but instead of bothering to find such a context in the caller, we
can just as well accept NULL for ctx.

Signed-off-by: Kevin Wolf <address@hidden>


  Commit: c8ca33d06def97d909a8511377b82994ae4e5981
      
https://github.com/qemu/qemu/commit/c8ca33d06def97d909a8511377b82994ae4e5981
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/io.c

  Log Message:
  -----------
  block: Move bdrv_drain_all_begin() out of coroutine context

Before we can introduce a single polling loop for all nodes in
bdrv_drain_all_begin(), we must make sure to run it outside of coroutine
context like we already do for bdrv_do_drained_begin().

Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 6cd5c9d7b2df93ef54144f170d4c908934a4767f
      
https://github.com/qemu/qemu/commit/6cd5c9d7b2df93ef54144f170d4c908934a4767f
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block.c
    M block/io.c
    M block/vvfat.c
    M include/block/block.h
    M include/block/block_int.h

  Log Message:
  -----------
  block: ignore_bds_parents parameter for drain functions

In the future, bdrv_drain_all_begin/end() will drain all individual
nodes separately rather than whole subtrees. This means that we don't
want to propagate the drain to all parents any more: If the parent is a
BDS, it will already be drained separately. Recursing to all parents is
unnecessary work and would make it an O(n²) operation.

Prepare the drain function for the changed drain_all by adding an
ignore_bds_parents parameter to the internal implementation that
prevents the propagation of the drain to BDS parents. We still (have to)
propagate it to non-BDS parents like BlockBackends or Jobs because those
are not drained separately.

Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 0f12264e7a41458179ad10276a7c33c72024861a
      
https://github.com/qemu/qemu/commit/0f12264e7a41458179ad10276a7c33c72024861a
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block.c
    M block/io.c
    M include/block/block.h
    M include/block/block_int.h

  Log Message:
  -----------
  block: Allow graph changes in bdrv_drain_all_begin/end sections

bdrv_drain_all_*() used bdrv_next() to iterate over all root nodes and
did a subtree drain for each of them. This works fine as long as the
graph is static, but sadly, reality looks different.

If the graph changes so that root nodes are added or removed, we would
have to compensate for this. bdrv_next() returns each root node only
once even if it's the root node for multiple BlockBackends or for a
monitor-owned block driver tree, which would only complicate things.

The much easier and more obviously correct way is to fundamentally
change the way the functions work: Iterate over all BlockDriverStates,
no matter who owns them, and drain them individually. Compensation is
only necessary when a new BDS is created inside a drain_all section.
Removal of a BDS doesn't require any action because it's gone afterwards
anyway.

Signed-off-by: Kevin Wolf <address@hidden>
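
One way the compensation could look, as a toy model (self-contained C, not
the actual QEMU code): a global counter tracks open drain_all sections, and
any node created while it is nonzero starts out already quiesced.

    #include <stdio.h>

    /* Number of drain_all sections currently open. */
    static int drain_all_count;

    typedef struct {
        int quiesce_counter;
    } Node;

    static void drain_all_begin(void) { drain_all_count++; }
    static void drain_all_end(void)   { drain_all_count--; }

    /* A node created inside a drain_all section inherits the open
     * sections, which is the only compensation this scheme needs. */
    static void node_init(Node *bs)
    {
        bs->quiesce_counter = drain_all_count;
    }

    int main(void)
    {
        Node early, late;

        node_init(&early);           /* created outside any section */
        drain_all_begin();
        node_init(&late);            /* created inside the section */
        printf("early=%d late=%d\n",
               early.quiesce_counter, late.quiesce_counter);
        drain_all_end();
        return 0;
    }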


  Commit: 19f7a7e574a099dca13120441fbe723cea9c1dc2
      
https://github.com/qemu/qemu/commit/19f7a7e574a099dca13120441fbe723cea9c1dc2
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: Test graph changes in drain_all section

This tests both adding and removing a node between bdrv_drain_all_begin()
and bdrv_drain_all_end(), and enables the existing detach test for
drain_all.

Signed-off-by: Kevin Wolf <address@hidden>


  Commit: f45280cbf66d8e58224f6a253d0ae2aa72cc6280
      
https://github.com/qemu/qemu/commit/f45280cbf66d8e58224f6a253d0ae2aa72cc6280
  Author: Greg Kurz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/block-backend.c

  Log Message:
  -----------
  block: fix QEMU crash with scsi-hd and drive_del

Removing a drive with drive_del while it is being used to run an I/O
intensive workload can cause QEMU to crash.

An AIO flush can yield at some point:

blk_aio_flush_entry()
 blk_co_flush(blk)
  bdrv_co_flush(blk->root->bs)
   ...
    qemu_coroutine_yield()

and let the HMP command run, free blk->root and give control
back to the AIO flush:

    hmp_drive_del()
     blk_remove_bs()
      bdrv_root_unref_child(blk->root)
       child_bs = blk->root->bs
       bdrv_detach_child(blk->root)
  bdrv_replace_child(blk->root, NULL)
   blk->root->bs = NULL
  g_free(blk->root) <============== blk->root becomes stale
       bdrv_unref(child_bs)
  bdrv_delete(child_bs)
   bdrv_close()
    bdrv_drained_begin()
     bdrv_do_drained_begin()
      bdrv_drain_recurse()
       aio_poll()
        ...
        qemu_coroutine_switch()

and the AIO flush completion ends up dereferencing blk->root:

  blk_aio_complete()
   scsi_aio_complete()
    blk_get_aio_context(blk)
     bs = blk_bs(blk)
 ie, bs = blk->root ? blk->root->bs : NULL
      ^^^^^
      stale

The problem is that we should avoid making block driver graph
changes while we have in-flight requests. Let's drain all I/O
for this BB before calling bdrv_root_unref_child().

Signed-off-by: Greg Kurz <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 4295c5fc613a6dae55c804b689fdbbeb0c4af816
      
https://github.com/qemu/qemu/commit/4295c5fc613a6dae55c804b689fdbbeb0c4af816
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/mirror.c

  Log Message:
  -----------
  block/mirror: Pull out mirror_perform()

When converting mirror's I/O to coroutines, we are going to need a point
where these coroutines are created.  mirror_perform() is going to be
that point.

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Reviewed-by: Vladimir Sementsov-Ogievskiy <address@hidden>
Reviewed-by: Jeff Cody <address@hidden>
Reviewed-by: Alberto Garcia <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>


  Commit: 2e1990b26e5aa1ba1a730aa6281df123bb7a71b6
      
https://github.com/qemu/qemu/commit/2e1990b26e5aa1ba1a730aa6281df123bb7a71b6
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/mirror.c

  Log Message:
  -----------
  block/mirror: Convert to coroutines

In order to talk to the source BDS (and maybe in the future to the
target BDS as well) directly, we need to convert our existing AIO
requests into coroutine I/O requests.

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>


  Commit: 12aa40822daf0ab13059b27b29a83ded43bae3bb
      
https://github.com/qemu/qemu/commit/12aa40822daf0ab13059b27b29a83ded43bae3bb
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/mirror.c

  Log Message:
  -----------
  block/mirror: Use CoQueue to wait on in-flight ops

Attach a CoQueue to each in-flight operation so that, if we need to wait
for one, we can use it to wait instead of just blindly yielding and hoping
for some operation to wake us.

A later patch will use this infrastructure to allow requests accessing
the same area of the virtual disk to specifically wait for each other.

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>
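
A single-threaded toy model of the idea (self-contained C, not the real
CoQueue API): each in-flight operation carries its own list of waiters, so
a waiter is woken by the completion of exactly the operation it registered
on, not by any completion at all.

    #include <stdio.h>

    #define MAX_WAITERS 4

    typedef struct Op {
        const char *name;
        void (*waiters[MAX_WAITERS])(const char *);
        int nwaiters;
    } Op;

    /* Queue a waiter on this specific operation only. */
    static void wait_on(Op *op, void (*cb)(const char *))
    {
        op->waiters[op->nwaiters++] = cb;
    }

    /* Completing an operation wakes exactly its own waiters. */
    static void op_complete(Op *op)
    {
        for (int i = 0; i < op->nwaiters; i++) {
            op->waiters[i](op->name);
        }
        op->nwaiters = 0;
    }

    static void resume(const char *who)
    {
        printf("resumed after %s\n", who);
    }

    int main(void)
    {
        Op a = { .name = "op A" };
        Op b = { .name = "op B" };

        wait_on(&a, resume);     /* we conflict with op A only */
        op_complete(&b);         /* unrelated completion: no wakeup */
        op_complete(&a);         /* this one wakes us */
        return 0;
    }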


  Commit: 1181e19a6d6986a08b889a32438d0ceeee9b2ef3
      
https://github.com/qemu/qemu/commit/1181e19a6d6986a08b889a32438d0ceeee9b2ef3
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/mirror.c

  Log Message:
  -----------
  block/mirror: Wait for in-flight op conflicts

This patch makes the mirror code differentiate between simply waiting
for any operation to complete (mirror_wait_for_free_in_flight_slot())
and specifically waiting for all operations touching a certain range of
the virtual disk to complete (mirror_wait_on_conflicts()).

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>


  Commit: 138f9fffb809451ef80f5be4647558b72f2339ad
      
https://github.com/qemu/qemu/commit/138f9fffb809451ef80f5be4647558b72f2339ad
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/mirror.c

  Log Message:
  -----------
  block/mirror: Use source as a BdrvChild

With this, the mirror_top_bs is no longer just a technically required
node in the BDS graph but actually represents the block job operation.

Also, drop MirrorBlockJob.source, as we can reach it through
mirror_top_bs->backing.

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Reviewed-by: Alberto Garcia <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>


  Commit: ec9f10fe064f2abb9dc60a9fa580d8d0933f2acf
      
https://github.com/qemu/qemu/commit/ec9f10fe064f2abb9dc60a9fa580d8d0933f2acf
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block.c

  Log Message:
  -----------
  block: Generalize should_update_child() rule

Currently, bdrv_replace_node() refuses to create loops from one BDS to
itself if the BDS to be replaced is the backing node of the BDS to
replace it: Say there is a node A and a node B.  Replacing B by A means
making all references to B point to A.  If B is a child of A (i.e. A has
a reference to B), that would mean we would have to make this reference
point to A itself -- so we'd create a loop.

bdrv_replace_node() (through should_update_child()) refuses to do so if
B is the backing node of A.  There is no reason why we should create
loops if B is not the backing node of A, though.  The BDS graph should
never contain loops, so we should always refuse to create them.

If B is a child of A and B is to be replaced by A, we should simply
leave B in place, because that is the most sensible choice.

A more specific argument would be: Putting filter drivers into the BDS
graph is basically the same as appending an overlay to a backing chain.
But the main child BDS of a filter driver is not "backing" but "file",
so restricting the no-loop rule to backing nodes would fail here.

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Reviewed-by: Alberto Garcia <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>


  Commit: a33fbb4f8b64226becf502a123733776ce319b24
      
https://github.com/qemu/qemu/commit/a33fbb4f8b64226becf502a123733776ce319b24
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/backup.c
    M block/dirty-bitmap.c
    M include/qemu/hbitmap.h
    M tests/test-hbitmap.c
    M util/hbitmap.c

  Log Message:
  -----------
  hbitmap: Add @advance param to hbitmap_iter_next()

This new parameter allows the caller to just query the next dirty
position without moving the iterator.

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Reviewed-by: John Snow <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>
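
A self-contained toy iterator showing the flag's effect (not the real
HBitmap code): with advance=false the next dirty position is only peeked at
and the cursor stays put; with advance=true the position is consumed.

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        const uint8_t *bits;   /* one byte per position, nonzero = dirty */
        int64_t size;
        int64_t pos;           /* cursor: next position to examine */
    } Iter;

    static int64_t iter_next(Iter *it, bool advance)
    {
        for (int64_t i = it->pos; i < it->size; i++) {
            if (it->bits[i]) {
                if (advance) {
                    it->pos = i + 1;   /* consume the position */
                }
                return i;              /* report it either way */
            }
        }
        if (advance) {
            it->pos = it->size;
        }
        return -1;                     /* no more dirty positions */
    }

    int main(void)
    {
        uint8_t map[] = { 0, 1, 0, 1 };
        Iter it = { .bits = map, .size = 4, .pos = 0 };

        printf("peek: %" PRId64 "\n", iter_next(&it, false));  /* 1, cursor stays */
        printf("next: %" PRId64 "\n", iter_next(&it, true));   /* 1, cursor moves */
        printf("next: %" PRId64 "\n", iter_next(&it, true));   /* 3 */
        return 0;
    }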


  Commit: 269576848ec3d57d2d958cf5ac69b08c44adf816
      
https://github.com/qemu/qemu/commit/269576848ec3d57d2d958cf5ac69b08c44adf816
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M tests/test-hbitmap.c

  Log Message:
  -----------
  test-hbitmap: Add non-advancing iter_next tests

Add a function that wraps hbitmap_iter_next() and always calls it in
non-advancing mode first, and in advancing mode next.  The result should
always be the same.

By using this function everywhere we called hbitmap_iter_next() before,
we should get good test coverage for non-advancing hbitmap_iter_next().

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Reviewed-by: John Snow <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>
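
Continuing the toy iterator from the sketch a few entries above (a
hypothetical helper, not the real test code), the wrapper described here
could look like this: call non-advancing mode first, then advancing mode,
and check that both report the same position.

    #include <assert.h>

    /* Uses the Iter/iter_next() sketch shown earlier. */
    static int64_t iter_next_checked(Iter *it)
    {
        int64_t peek = iter_next(it, false);   /* must not move the cursor */
        int64_t real = iter_next(it, true);    /* now actually advance */

        assert(peek == real);                  /* both modes must agree */
        return real;
    }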


  Commit: 72d10a94213a954ad569095cb4491f2ae0853c40
      
https://github.com/qemu/qemu/commit/72d10a94213a954ad569095cb4491f2ae0853c40
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/dirty-bitmap.c
    M include/block/dirty-bitmap.h

  Log Message:
  -----------
  block/dirty-bitmap: Add bdrv_dirty_iter_next_area

This new function allows the caller to look for a consecutively dirty
area in a dirty bitmap.

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Reviewed-by: John Snow <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>
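
A toy version of the idea (self-contained C; the real
bdrv_dirty_iter_next_area has a different signature): starting from an
offset, find the first dirty position, then extend the area while positions
stay dirty, bounded by a maximum length.

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool next_dirty_area(const uint8_t *bits, int64_t size,
                                int64_t *offset, int64_t *len, int64_t max_len)
    {
        int64_t start = *offset;

        while (start < size && !bits[start]) {
            start++;                              /* skip the clean prefix */
        }
        if (start == size) {
            return false;                         /* nothing dirty left */
        }

        int64_t end = start;
        while (end < size && end - start < max_len && bits[end]) {
            end++;                                /* grow the dirty run */
        }

        *offset = start;
        *len = end - start;
        return true;
    }

    int main(void)
    {
        const uint8_t map[] = { 0, 1, 1, 0, 1 };
        int64_t offset = 0, len = 0;

        while (next_dirty_area(map, 5, &offset, &len, 64)) {
            printf("dirty area at %" PRId64 ", length %" PRId64 "\n",
                   offset, len);
            offset += len;                        /* continue after this area */
        }
        return 0;
    }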


  Commit: 429076e88dec48ce22a6fb3ba11e5ccb6134f62d
      
https://github.com/qemu/qemu/commit/429076e88dec48ce22a6fb3ba11e5ccb6134f62d
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/mirror.c

  Log Message:
  -----------
  block/mirror: Add MirrorBDSOpaque

This will allow us to access the block job data when the mirror block
driver becomes more complex.

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>


  Commit: 62f13600593322b8e796f15fd6742064fba6ab65
      
https://github.com/qemu/qemu/commit/62f13600593322b8e796f15fd6742064fba6ab65
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M include/qemu/job.h
    M job.c

  Log Message:
  -----------
  job: Add job_progress_increase_remaining()

Signed-off-by: Max Reitz <address@hidden>
Message-id: address@hidden
Reviewed-by: Kevin Wolf <address@hidden>
Signed-off-by: Max Reitz <address@hidden>


  Commit: d06107ade0ce74dc39739bac80de84b51ec18546
      
https://github.com/qemu/qemu/commit/d06107ade0ce74dc39739bac80de84b51ec18546
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/mirror.c
    M qapi/block-core.json

  Log Message:
  -----------
  block/mirror: Add active mirroring

This patch implements active synchronous mirroring.  In active mode, the
passive mechanism will still be in place and is used to copy all
initially dirty clusters off the source disk; but every write request
will write data both to the source and the target disk, so the source
cannot be dirtied faster than data is mirrored to the target.  Also,
once the block job has converged (BLOCK_JOB_READY sent), source and
target are guaranteed to stay in sync (unless an error occurs).

Active mode is completely optional and currently disabled at runtime.  A
later patch will add a way for users to enable it.

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>
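
A toy model of the write-blocking path (self-contained C, not the mirror
code itself): a guest write only completes after both the source and the
target have been updated, so the source cannot get dirtier than what has
already been mirrored.

    #include <stdio.h>
    #include <string.h>

    enum { DISK_SIZE = 8 };

    static char source[DISK_SIZE];
    static char target[DISK_SIZE];
    static int  dirty[DISK_SIZE];   /* what the background copy still owes */

    static void guest_write(int offset, const char *data, int len)
    {
        memcpy(source + offset, data, len);   /* write the source ... */
        memcpy(target + offset, data, len);   /* ... and mirror it right away */
        for (int i = 0; i < len; i++) {
            dirty[offset + i] = 0;            /* nothing left to copy here */
        }
    }

    int main(void)
    {
        guest_write(2, "hi", 2);
        printf("source and target in sync: %s\n",
               memcmp(source, target, DISK_SIZE) == 0 ? "yes" : "no");
        return 0;
    }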


  Commit: 481debaa3270fb276dcf27205aa27ad52cc34590
      
https://github.com/qemu/qemu/commit/481debaa3270fb276dcf27205aa27ad52cc34590
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block/mirror.c
    M blockdev.c
    M include/block/block_int.h
    M qapi/block-core.json

  Log Message:
  -----------
  block/mirror: Add copy mode QAPI interface

This patch allows the user to specify whether to use active or only
background mode for mirror block jobs.  Currently, this setting will
remain constant for the duration of the entire block job.

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Alberto Garcia <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>


  Commit: e38da02091eeed56bb370ec9d72c4367d4e9ada3
      
https://github.com/qemu/qemu/commit/e38da02091eeed56bb370ec9d72c4367d4e9ada3
  Author: Max Reitz <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    A tests/qemu-iotests/151
    A tests/qemu-iotests/151.out
    M tests/qemu-iotests/group

  Log Message:
  -----------
  iotests: Add test for active mirroring

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Reviewed-by: Alberto Garcia <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>


  Commit: 4c790afe2503eab12874508acab5b388d7babfd2
      
https://github.com/qemu/qemu/commit/4c790afe2503eab12874508acab5b388d7babfd2
  Author: Kevin Wolf <address@hidden>
  Date:   2018-06-18 (Mon, 18 Jun 2018)

  Changed paths:
    M block.c
    M block/backup.c
    M block/dirty-bitmap.c
    M block/mirror.c
    M blockdev.c
    M include/block/block_int.h
    M include/block/dirty-bitmap.h
    M include/qemu/hbitmap.h
    M include/qemu/job.h
    M job.c
    M qapi/block-core.json
    A tests/qemu-iotests/151
    A tests/qemu-iotests/151.out
    M tests/qemu-iotests/group
    M tests/test-hbitmap.c
    M util/hbitmap.c

  Log Message:
  -----------
  Merge remote-tracking branch 'mreitz/tags/pull-block-2018-06-18' into queue-block

Block patches:
- Active mirror (blockdev-mirror copy-mode=write-blocking)

# gpg: Signature made Mon Jun 18 17:08:19 2018 CEST
# gpg:                using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <address@hidden>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1  1829 F407 DB00 61D5 CF40

* mreitz/tags/pull-block-2018-06-18:
  iotests: Add test for active mirroring
  block/mirror: Add copy mode QAPI interface
  block/mirror: Add active mirroring
  job: Add job_progress_increase_remaining()
  block/mirror: Add MirrorBDSOpaque
  block/dirty-bitmap: Add bdrv_dirty_iter_next_area
  test-hbitmap: Add non-advancing iter_next tests
  hbitmap: Add @advance param to hbitmap_iter_next()
  block: Generalize should_update_child() rule
  block/mirror: Use source as a BdrvChild
  block/mirror: Wait for in-flight op conflicts
  block/mirror: Use CoQueue to wait on in-flight ops
  block/mirror: Convert to coroutines
  block/mirror: Pull out mirror_perform()

Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 0f01b9fdd4ba0a3d38e26e89e1b1faf1213eb4f1
      
https://github.com/qemu/qemu/commit/0f01b9fdd4ba0a3d38e26e89e1b1faf1213eb4f1
  Author: Peter Maydell <address@hidden>
  Date:   2018-06-19 (Tue, 19 Jun 2018)

  Changed paths:
    M block.c
    M block/backup.c
    M block/block-backend.c
    M block/dirty-bitmap.c
    M block/io.c
    M block/mirror.c
    M block/vvfat.c
    M blockdev.c
    M blockjob.c
    M include/block/aio-wait.h
    M include/block/block.h
    M include/block/block_int.h
    M include/block/blockjob_int.h
    M include/block/dirty-bitmap.h
    M include/qemu/hbitmap.h
    M include/qemu/job.h
    M job.c
    M qapi/block-core.json
    A tests/qemu-iotests/151
    A tests/qemu-iotests/151.out
    M tests/qemu-iotests/group
    M tests/test-bdrv-drain.c
    M tests/test-hbitmap.c
    M util/hbitmap.c

  Log Message:
  -----------
  Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging

Block layer patches:

- Active mirror (blockdev-mirror copy-mode=write-blocking)
- bdrv_drain_*() fixes and test cases
- Fix crash with scsi-hd and drive_del

# gpg: Signature made Mon 18 Jun 2018 17:44:10 BST
# gpg:                using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <address@hidden>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74  56FE 7F09 B272 C88F 2FD6

* remotes/kevin/tags/for-upstream: (35 commits)
  iotests: Add test for active mirroring
  block/mirror: Add copy mode QAPI interface
  block/mirror: Add active mirroring
  job: Add job_progress_increase_remaining()
  block/mirror: Add MirrorBDSOpaque
  block/dirty-bitmap: Add bdrv_dirty_iter_next_area
  test-hbitmap: Add non-advancing iter_next tests
  hbitmap: Add @advance param to hbitmap_iter_next()
  block: Generalize should_update_child() rule
  block/mirror: Use source as a BdrvChild
  block/mirror: Wait for in-flight op conflicts
  block/mirror: Use CoQueue to wait on in-flight ops
  block/mirror: Convert to coroutines
  block/mirror: Pull out mirror_perform()
  block: fix QEMU crash with scsi-hd and drive_del
  test-bdrv-drain: Test graph changes in drain_all section
  block: Allow graph changes in bdrv_drain_all_begin/end sections
  block: ignore_bds_parents parameter for drain functions
  block: Move bdrv_drain_all_begin() out of coroutine context
  block: Allow AIO_WAIT_WHILE with NULL ctx
  ...

Signed-off-by: Peter Maydell <address@hidden>


Compare: https://github.com/qemu/qemu/compare/a01fba4687b9...0f01b9fdd4ba
      **NOTE:** This service has been marked for deprecation: 
https://developer.github.com/changes/2018-04-25-github-services-deprecation/

      Functionality will be removed from GitHub.com on January 31st, 2019.
