From: GitHub
Subject: [Qemu-commits] [qemu/qemu] 536078: block/commit: add block job creation flags
Date: Tue, 25 Sep 2018 10:09:10 -0700

  Branch: refs/heads/master
  Home:   https://github.com/qemu/qemu
  Commit: 5360782d0827854383097d560715d8d8027ee590
      https://github.com/qemu/qemu/commit/5360782d0827854383097d560715d8d8027ee590
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block/commit.c
    M blockdev.c
    M include/block/block_int.h

  Log Message:
  -----------
  block/commit: add block job creation flags

Add support for taking and passing forward job creation flags.

Signed-off-by: John Snow <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Reviewed-by: Jeff Cody <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>


  Commit: a1999b33488daba68a1bcd7c6fdf314ddeacc6a2
      https://github.com/qemu/qemu/commit/a1999b33488daba68a1bcd7c6fdf314ddeacc6a2
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block/mirror.c
    M blockdev.c
    M include/block/block_int.h

  Log Message:
  -----------
  block/mirror: add block job creation flags

Add support for taking and passing forward job creation flags.

Signed-off-by: John Snow <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Reviewed-by: Jeff Cody <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>


  Commit: cf6320df581e6cbde6a95075266859a8f9ba9d55
      https://github.com/qemu/qemu/commit/cf6320df581e6cbde6a95075266859a8f9ba9d55
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block/stream.c
    M blockdev.c
    M include/block/block_int.h

  Log Message:
  -----------
  block/stream: add block job creation flags

Add support for taking and passing forward job creation flags.

Signed-off-by: John Snow <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Reviewed-by: Jeff Cody <address@hidden>
Message-id: address@hidden
Signed-off-by: Max Reitz <address@hidden>


  Commit: 22dffcbec62ba918db690ed44beba4bd4e970bb9
      https://github.com/qemu/qemu/commit/22dffcbec62ba918db690ed44beba4bd4e970bb9
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block/commit.c

  Log Message:
  -----------
  block/commit: refactor commit to use job callbacks

Use the component callbacks; prepare, abort, and clean.

NB: prepare is only called when the job has not yet failed,
and abort can be called after prepare.

complete -> prepare -> abort -> clean
complete -> abort -> clean

During the refactor, a potential problem with bdrv_drop_intermediate
was identified. The patched behavior is no worse than the pre-patch
behavior, so leave a FIXME for now, to be fixed in a future patch.

Signed-off-by: John Snow <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Message-id: address@hidden
Reviewed-by: Jeff Cody <address@hidden>
Signed-off-by: Max Reitz <address@hidden>
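
As a rough model of the callback ordering described above (a hedged
sketch with stand-in types and names, not the actual QEMU Job/JobDriver
code):

    #include <stdio.h>

    /* Stand-in job with the three component callbacks. */
    typedef struct Job {
        int ret;                        /* 0 while successful, negative on failure */
        int  (*prepare)(struct Job *);
        void (*abort)(struct Job *);
        void (*clean)(struct Job *);
    } Job;

    static void finalize_model(Job *job)
    {
        if (job->ret == 0 && job->prepare) {
            job->ret = job->prepare(job);   /* prepare only if not yet failed */
        }
        if (job->ret != 0 && job->abort) {
            job->abort(job);                /* abort may follow a failed prepare */
        }
        if (job->clean) {
            job->clean(job);                /* clean always runs last */
        }
    }

    static int prep(Job *j)  { (void)j; puts("prepare"); return -1; }
    static void abrt(Job *j) { (void)j; puts("abort"); }
    static void cln(Job *j)  { (void)j; puts("clean"); }

    int main(void)
    {
        Job j = { 0, prep, abrt, cln };
        finalize_model(&j);             /* prints: prepare, abort, clean */
        return 0;
    }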


  Commit: c2924ceaa7f1866148e2847c969fc1902a2524fa
      https://github.com/qemu/qemu/commit/c2924ceaa7f1866148e2847c969fc1902a2524fa
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block/mirror.c

  Log Message:
  -----------
  block/mirror: don't install backing chain on abort

In cases where we abort the block/mirror job, there's no point in
installing the new backing chain before we finish aborting.

Signed-off-by: John Snow <address@hidden>
Message-id: address@hidden
Reviewed-by: Jeff Cody <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Signed-off-by: Max Reitz <address@hidden>


  Commit: 737efc1eda23b904fbe0e66b37715fb0e5c3e58b
      https://github.com/qemu/qemu/commit/737efc1eda23b904fbe0e66b37715fb0e5c3e58b
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block/mirror.c

  Log Message:
  -----------
  block/mirror: conservative mirror_exit refactor

For purposes of minimum code movement, refactor the mirror_exit
callback to use the post-finalization callbacks in a trivial way.

Signed-off-by: John Snow <address@hidden>
Message-id: address@hidden
Reviewed-by: Jeff Cody <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
[mreitz: Added comment for the mirror_exit() function]
Signed-off-by: Max Reitz <address@hidden>


  Commit: 1b57488acf1beba157bcd8c926e596342bcb5c60
      https://github.com/qemu/qemu/commit/1b57488acf1beba157bcd8c926e596342bcb5c60
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block/stream.c

  Log Message:
  -----------
  block/stream: refactor stream to use job callbacks

Signed-off-by: John Snow <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Message-id: address@hidden
Reviewed-by: Jeff Cody <address@hidden>
Signed-off-by: Max Reitz <address@hidden>


  Commit: 0cc4643b01a0138543e886db8e3bf8a3f74ff8f9
      https://github.com/qemu/qemu/commit/0cc4643b01a0138543e886db8e3bf8a3f74ff8f9
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M tests/test-blockjob.c

  Log Message:
  -----------
  tests/blockjob: replace Blockjob with Job

These tests don't actually test blockjobs anymore; they test
generic Job lifetimes. Change the types accordingly.

Signed-off-by: John Snow <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Message-id: address@hidden
Reviewed-by: Jeff Cody <address@hidden>
Signed-off-by: Max Reitz <address@hidden>


  Commit: 977d26fdbeb35d8d2d0f203f9556d44a353e0dfd
      https://github.com/qemu/qemu/commit/977d26fdbeb35d8d2d0f203f9556d44a353e0dfd
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M tests/test-blockjob.c

  Log Message:
  -----------
  tests/test-blockjob: remove exit callback

We remove the exit callback and the completed boolean along with it.
We can simulate it just fine by waiting for the job to defer to the
main loop, and then giving it one final kick to get the main loop
portion to run.

Signed-off-by: John Snow <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Message-id: address@hidden
Reviewed-by: Jeff Cody <address@hidden>
Signed-off-by: Max Reitz <address@hidden>


  Commit: e4dad4275d51b594c8abbe726a4927f6f388e427
      https://github.com/qemu/qemu/commit/e4dad4275d51b594c8abbe726a4927f6f388e427
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M tests/test-blockjob-txn.c

  Log Message:
  -----------
  tests/test-blockjob-txn: move .exit to .clean

The exit callback in this test actually only performs cleanup.

Signed-off-by: John Snow <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Message-id: address@hidden
Reviewed-by: Jeff Cody <address@hidden>
Signed-off-by: Max Reitz <address@hidden>


  Commit: ccbfb3319aa265e71c16dac976ff857d0a5bcb4b
      https://github.com/qemu/qemu/commit/ccbfb3319aa265e71c16dac976ff857d0a5bcb4b
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M include/qemu/job.h
    M job.c

  Log Message:
  -----------
  jobs: remove .exit callback

Now that all of the jobs use the component finalization callbacks,
there's no use for the heavy-hammer .exit callback anymore.

job_exit becomes a glorified type shim so that we can call
job_completed from aio_bh_schedule_oneshot.

Move these three functions down into job.c to eliminate a
forward reference.

Signed-off-by: John Snow <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Message-id: address@hidden
Reviewed-by: Jeff Cody <address@hidden>
Signed-off-by: Max Reitz <address@hidden>
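
A minimal sketch of the type-shim pattern mentioned here (stand-in Job
type; in the real code the shim is scheduled with
aio_bh_schedule_oneshot rather than called directly):

    #include <stdio.h>

    typedef struct Job { const char *id; int ret; } Job;   /* stand-in */

    static void job_completed(Job *job)
    {
        printf("job %s: completed, ret=%d\n", job->id, job->ret);
    }

    /* Bottom-half callbacks take an untyped opaque pointer, so the shim
     * merely recovers the Job * and forwards it. */
    static void job_exit(void *opaque)
    {
        job_completed(opaque);
    }

    int main(void)
    {
        Job job = { "demo", 0 };
        job_exit(&job);   /* stands in for aio_bh_schedule_oneshot(ctx, job_exit, &job) */
        return 0;
    }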


  Commit: 96fbf5345f60a87fab8e7ea79a2406f381027db9
      https://github.com/qemu/qemu/commit/96fbf5345f60a87fab8e7ea79a2406f381027db9
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M blockdev.c
    M qapi/block-core.json

  Log Message:
  -----------
  qapi/block-commit: expose new job properties

Signed-off-by: John Snow <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Message-id: address@hidden
Reviewed-by: Jeff Cody <address@hidden>
Signed-off-by: Max Reitz <address@hidden>


  Commit: a6b58adec28ff43c0f29ff7c95cdd5d11e87cf61
      https://github.com/qemu/qemu/commit/a6b58adec28ff43c0f29ff7c95cdd5d11e87cf61
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M blockdev.c
    M qapi/block-core.json

  Log Message:
  -----------
  qapi/block-mirror: expose new job properties

Signed-off-by: John Snow <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Message-id: address@hidden
Reviewed-by: Jeff Cody <address@hidden>
Signed-off-by: Max Reitz <address@hidden>


  Commit: 241ca1ab78542f02e666636e0323bcfe3cb1d5e8
      https://github.com/qemu/qemu/commit/241ca1ab78542f02e666636e0323bcfe3cb1d5e8
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M blockdev.c
    M hmp.c
    M qapi/block-core.json

  Log Message:
  -----------
  qapi/block-stream: expose new job properties

Signed-off-by: John Snow <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Message-id: address@hidden
Reviewed-by: Jeff Cody <address@hidden>
Signed-off-by: Max Reitz <address@hidden>


  Commit: dfaff2c37dfa52ab045cf87503e60ea56317230a
      https://github.com/qemu/qemu/commit/dfaff2c37dfa52ab045cf87503e60ea56317230a
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M qapi/block-core.json

  Log Message:
  -----------
  block/backup: qapi documentation fixup

Fix documentation to match the other jobs amended for 3.1.

Signed-off-by: John Snow <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Message-id: address@hidden
Reviewed-by: Jeff Cody <address@hidden>
Signed-off-by: Max Reitz <address@hidden>


  Commit: 66da04ddd3dcb8c61ee664b6faced132da002006
      https://github.com/qemu/qemu/commit/66da04ddd3dcb8c61ee664b6faced132da002006
  Author: John Snow <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M blockdev.c

  Log Message:
  -----------
  blockdev: document transactional shortcomings

Presently only the backup job really guarantees what one would consider
transactional semantics. To guard against someone helpfully adding them
in the future, document that there are shortcomings in the model that
would need to be audited at that time.

Signed-off-by: John Snow <address@hidden>
Message-id: address@hidden
Reviewed-by: Jeff Cody <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Signed-off-by: Max Reitz <address@hidden>


  Commit: 3c605f4074ebeb97970eb660fb56a9cb06525923
      https://github.com/qemu/qemu/commit/3c605f4074ebeb97970eb660fb56a9cb06525923
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M blockdev.c
    M qapi/block-core.json

  Log Message:
  -----------
  commit: Add top-node/base-node options

The block-commit QMP command required specifying the top and base nodes
of the commit job using the file name of the respective node. While
this works in simple cases (local files with absolute paths), the file
names generated for more complicated setups can be hard to predict.

The block-commit command has more problems than just this, so we want to
replace it altogether in the long run, but libvirt needs a reliable way
to address nodes now. So we don't want to wait for a new, cleaner
command, but just add the minimal thing needed right now.

This adds two new options top-node and base-node to the command, which
allow specifying node names instead. They are mutually exclusive with
the old options.

Signed-off-by: Kevin Wolf <address@hidden>
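
A hedged sketch of the mutual-exclusion check this implies (a
hypothetical helper; the real validation in qmp_block_commit differs in
detail):

    #include <stdbool.h>

    bool commit_args_valid(bool has_top, bool has_top_node,
                           bool has_base, bool has_base_node,
                           const char **errmsg)
    {
        if (has_top && has_top_node) {
            *errmsg = "'top-node' and 'top' are mutually exclusive";
            return false;
        }
        if (has_base && has_base_node) {
            *errmsg = "'base-node' and 'base' are mutually exclusive";
            return false;
        }
        return true;
    }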


  Commit: d57177a48fc604e5427921bf20b22ee0e6d578b3
      https://github.com/qemu/qemu/commit/d57177a48fc604e5427921bf20b22ee0e6d578b3
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M tests/qemu-iotests/040
    M tests/qemu-iotests/040.out

  Log Message:
  -----------
  qemu-iotests: Test commit with top-node/base-node

This adds some tests for block-commit with the new options top-node and
base-node (taking node names) instead of top and base (taking file
names).

Signed-off-by: Kevin Wolf <address@hidden>


  Commit: e091f0e905a4481f347913420f327d427f18d9d4
      https://github.com/qemu/qemu/commit/e091f0e905a4481f347913420f327d427f18d9d4
  Author: Sergio Lopez <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block/linux-aio.c

  Log Message:
  -----------
  block/linux-aio: acquire AioContext before qemu_laio_process_completions

In qemu_laio_process_completions_and_submit, the AioContext is acquired
before the ioq_submit iteration, but only after
qemu_laio_process_completions has run, even though the latter is not
thread-safe either.

This change avoids a number of random crashes when the Main Thread and
an IO Thread collide processing completions for the same AioContext.
This is an example of such a crash:

 - The IO Thread is trying to acquire the AioContext at aio_co_enter,
   which evidences that it didn't lock it before:

Thread 3 (Thread 0x7fdfd8bd8700 (LWP 36743)):
 #0  0x00007fdfe0dd542d in __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
 #1  0x00007fdfe0dd0de6 in _L_lock_870 () at /lib64/libpthread.so.0
 #2  0x00007fdfe0dd0cdf in __GI___pthread_mutex_lock (address@hidden) at ../nptl/pthread_mutex_lock.c:114
 #3  0x00005631fc0603a7 in qemu_mutex_lock_impl (mutex=0x5631fde0e6c0, file=0x5631fc23520f "util/async.c", line=511) at util/qemu-thread-posix.c:66
 #4  0x00005631fc05b558 in aio_co_enter (ctx=0x5631fde0e660, co=0x7fdfcc0c2b40) at util/async.c:493
 #5  0x00005631fc05b5ac in aio_co_wake (co=<optimized out>) at util/async.c:478
 #6  0x00005631fbfc51ad in qemu_laio_process_completion (laiocb=<optimized out>) at block/linux-aio.c:104
 #7  0x00005631fbfc523c in qemu_laio_process_completions (address@hidden) at block/linux-aio.c:222
 #8  0x00005631fbfc5499 in qemu_laio_process_completions_and_submit (s=0x7fdfc0297670) at block/linux-aio.c:237
 #9  0x00005631fc05d978 in aio_dispatch_handlers (address@hidden) at util/aio-posix.c:406
 #10 0x00005631fc05e3ea in aio_poll (ctx=0x5631fde0e660, address@hidden) at util/aio-posix.c:693
 #11 0x00005631fbd7ad96 in iothread_run (opaque=0x5631fde0e1c0) at iothread.c:64
 #12 0x00007fdfe0dcee25 in start_thread (arg=0x7fdfd8bd8700) at pthread_create.c:308
 #13 0x00007fdfe0afc34d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

 - The Main Thread is also processing completions from the same
   AioContext, and crashes due to failed assertion at util/iov.c:78:

Thread 1 (Thread 0x7fdfeb5eac80 (LWP 36740)):
 #0  0x00007fdfe0a391f7 in __GI_raise (address@hidden) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
 #1  0x00007fdfe0a3a8e8 in __GI_abort () at abort.c:90
 #2  0x00007fdfe0a32266 in __assert_fail_base (fmt=0x7fdfe0b84e68 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", address@hidden "offset == 0", address@hidden "util/iov.c", address@hidden, address@hidden <__PRETTY_FUNCTION__.15220> "iov_memset") at assert.c:92
 #3  0x00007fdfe0a32312 in __GI___assert_fail (address@hidden "offset == 0", address@hidden "util/iov.c", address@hidden, address@hidden <__PRETTY_FUNCTION__.15220> "iov_memset") at assert.c:101
 #4  0x00005631fc065287 in iov_memset (iov=<optimized out>, iov_cnt=<optimized out>, offset=<optimized out>, address@hidden, address@hidden, bytes=15515191315812405248) at util/iov.c:78
 #5  0x00005631fc065a63 in qemu_iovec_memset (qiov=<optimized out>, address@hidden, address@hidden, bytes=<optimized out>) at util/iov.c:410
 #6  0x00005631fbfc5178 in qemu_laio_process_completion (laiocb=0x7fdd920df630) at block/linux-aio.c:88
 #7  0x00005631fbfc523c in qemu_laio_process_completions (address@hidden) at block/linux-aio.c:222
 #8  0x00005631fbfc5499 in qemu_laio_process_completions_and_submit (s=0x7fdfc0297670) at block/linux-aio.c:237
 #9  0x00005631fbfc54ed in qemu_laio_poll_cb (opaque=<optimized out>) at block/linux-aio.c:272
 #10 0x00005631fc05d85e in run_poll_handlers_once (address@hidden) at util/aio-posix.c:497
 #11 0x00005631fc05e2ca in aio_poll (blocking=false, ctx=0x5631fde0e660) at util/aio-posix.c:574
 #12 0x00005631fc05e2ca in aio_poll (ctx=0x5631fde0e660, address@hidden) at util/aio-posix.c:604
 #13 0x00005631fbfcb8a3 in bdrv_do_drained_begin (ignore_parent=<optimized out>, recursive=<optimized out>, bs=<optimized out>) at block/io.c:273
 #14 0x00005631fbfcb8a3 in bdrv_do_drained_begin (bs=0x5631fe8b6200, recursive=<optimized out>, parent=0x0, ignore_bds_parents=<optimized out>, poll=<optimized out>) at block/io.c:390
 #15 0x00005631fbfbcd2e in blk_drain (blk=0x5631fe83ac80) at block/block-backend.c:1590
 #16 0x00005631fbfbe138 in blk_remove_bs (address@hidden) at block/block-backend.c:774
 #17 0x00005631fbfbe3d6 in blk_unref (blk=0x5631fe83ac80) at block/block-backend.c:401
 #18 0x00005631fbfbe3d6 in blk_unref (blk=0x5631fe83ac80) at block/block-backend.c:449
 #19 0x00005631fbfc9a69 in commit_complete (job=0x5631fe8b94b0, opaque=0x7fdfcc1bb080) at block/commit.c:92
 #20 0x00005631fbf7d662 in job_defer_to_main_loop_bh (opaque=0x7fdfcc1b4560) at job.c:973
 #21 0x00005631fc05ad41 in aio_bh_poll (bh=0x7fdfcc01ad90) at util/async.c:90
 #22 0x00005631fc05ad41 in aio_bh_poll (address@hidden) at util/async.c:118
 #23 0x00005631fc05e210 in aio_dispatch (ctx=0x5631fddffdb0) at util/aio-posix.c:436
 #24 0x00005631fc05ac1e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:261
 #25 0x00007fdfeaae44c9 in g_main_context_dispatch (context=0x5631fde00140) at gmain.c:3201
 #26 0x00007fdfeaae44c9 in g_main_context_dispatch (address@hidden) at gmain.c:3854
 #27 0x00005631fc05d503 in main_loop_wait () at util/main-loop.c:215
 #28 0x00005631fc05d503 in main_loop_wait (timeout=<optimized out>) at util/main-loop.c:238
 #29 0x00005631fc05d503 in main_loop_wait (address@hidden) at util/main-loop.c:497
 #30 0x00005631fbd81412 in main_loop () at vl.c:1866
 #31 0x00005631fbc18ff3 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4647

 - A closer examination shows that s->io_q.in_flight appears to have
   gone backwards:

(gdb) frame 7
 #7  0x00005631fbfc523c in qemu_laio_process_completions (address@hidden) at block/linux-aio.c:222
222                 qemu_laio_process_completion(laiocb);
(gdb) p s
$2 = (LinuxAioState *) 0x7fdfc0297670
(gdb) p *s
$3 = {aio_context = 0x5631fde0e660, ctx = 0x7fdfeb43b000, e = {rfd = 33, wfd = 33}, io_q = {plugged = 0, in_queue = 0, in_flight = 4294967280, blocked = false, pending = {sqh_first = 0x0, sqh_last = 0x7fdfc0297698}}, completion_bh = 0x7fdfc0280ef0, event_idx = 21, event_max = 241}
(gdb) p/x s->io_q.in_flight
$4 = 0xfffffff0

Signed-off-by: Sergio Lopez <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>
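
In outline, the fix widens the locked region so that completion
processing is covered too; a simplified sketch with stand-in
declarations (ioq_submit_pending is a hypothetical helper for the
submission step):

    typedef struct AioContext AioContext;
    typedef struct LinuxAioState { AioContext *aio_context; } LinuxAioState;

    void aio_context_acquire(AioContext *ctx);
    void aio_context_release(AioContext *ctx);
    void qemu_laio_process_completions(LinuxAioState *s);
    void ioq_submit_pending(LinuxAioState *s);    /* hypothetical helper */

    void process_completions_and_submit(LinuxAioState *s)
    {
        /* Neither completion processing nor submission is thread-safe,
         * so hold the AioContext lock across both. */
        aio_context_acquire(s->aio_context);
        qemu_laio_process_completions(s);
        ioq_submit_pending(s);
        aio_context_release(s->aio_context);
    }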


  Commit: 8961be33e8ca7e809c603223803ea66ef7ea5be7
      https://github.com/qemu/qemu/commit/8961be33e8ca7e809c603223803ea66ef7ea5be7
  Author: Alberto Garcia <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block.c

  Log Message:
  -----------
  block: Fix use after free error in bdrv_open_inherit()

When a block device is opened with BDRV_O_SNAPSHOT and the
bdrv_append_temp_snapshot() call fails then the error code path tries
to unref the already destroyed 'options' QDict.

This can be reproduced easily by setting TMPDIR to a location where
the QEMU process can't write:

   $ TMPDIR=/nonexistent $QEMU -drive driver=null-co,snapshot=on

Signed-off-by: Alberto Garcia <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>
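
The generic shape of such a fix, as a sketch (hypothetical names; the
actual bdrv_open_inherit error path differs in detail):

    typedef struct QDict QDict;

    void qobject_unref_dict(QDict *d);    /* stand-in; no-op on NULL */
    int consume_options(QDict *options);  /* takes ownership, even on failure */

    int open_with_snapshot_sketch(QDict *options)
    {
        int ret = consume_options(options);
        options = 0;                      /* ownership transferred: forget it */
        if (ret < 0) {
            goto fail;
        }
        return 0;

    fail:
        /* Safe now: unref of NULL is a no-op instead of a use-after-free
         * on the already-consumed dictionary. */
        qobject_unref_dict(options);
        return ret;
    }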


  Commit: 6a7014ef22ad3cf9110ca0e178f73a67a6483e00
      https://github.com/qemu/qemu/commit/6a7014ef22ad3cf9110ca0e178f73a67a6483e00
  Author: Alberto Garcia <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M tests/qemu-iotests/051
    M tests/qemu-iotests/051.out
    M tests/qemu-iotests/051.pc.out

  Log Message:
  -----------
  qemu-iotests: Test snapshot=on with nonexistent TMPDIR

We just fixed a bug that was causing a use-after-free when QEMU was
unable to create a temporary snapshot. This is a test case for this
scenario.

Signed-off-by: Alberto Garcia <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 6808ae0417131f8dbe7b051256dff7a16634dc1d
      https://github.com/qemu/qemu/commit/6808ae0417131f8dbe7b051256dff7a16634dc1d
  Author: Sergio Lopez <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M util/async.c

  Log Message:
  -----------
  util/async: use qemu_aio_coroutine_enter in co_schedule_bh_cb

AIO coroutines shouldn't be managed by an AioContext different from the
one assigned when they are created. aio_co_enter avoids entering a
coroutine from a different AioContext, calling aio_co_schedule instead.

Scheduled coroutines are then entered by co_schedule_bh_cb using
qemu_coroutine_enter, which just calls qemu_aio_coroutine_enter with the
current AioContext obtained with qemu_get_current_aio_context.
Eventually, co->ctx will be set to the AioContext passed as an argument
to qemu_aio_coroutine_enter.

This means that, if an IO Thread's AioContext is being processed by the
Main Thread (due to aio_poll being called with a BDS AioContext, as it
happens in AIO_WAIT_WHILE among other places), the AioContext from some
coroutines may be wrongly replaced with the one from the Main Thread.

This is the root cause behind some crashes, mainly triggered by the
drain code at block/io.c. The most common are these abort and failed
assertion:

util/async.c:aio_co_schedule
456     if (scheduled) {
457         fprintf(stderr,
458                 "%s: Co-routine was already scheduled in '%s'\n",
459                 __func__, scheduled);
460         abort();
461     }

util/qemu-coroutine-lock.c:
286     assert(mutex->holder == self);

But it's also known to cause random errors at different locations, and
even SIGSEGV with broken coroutine backtraces.

By using qemu_aio_coroutine_enter directly in co_schedule_bh_cb, we can
pass the correct AioContext as an argument, making sure co->ctx is not
wrongly altered.

Signed-off-by: Sergio Lopez <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>
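
A simplified sketch of the change (stand-in declarations; pop_scheduled
is a hypothetical helper around the real scheduled-coroutines list):

    #include <stddef.h>

    typedef struct AioContext AioContext;
    typedef struct Coroutine Coroutine;

    Coroutine *pop_scheduled(AioContext *ctx);    /* hypothetical helper */
    void qemu_aio_coroutine_enter(AioContext *ctx, Coroutine *co);

    void co_schedule_bh_cb_sketch(void *opaque)
    {
        AioContext *ctx = opaque;
        Coroutine *co;

        while ((co = pop_scheduled(ctx)) != NULL) {
            /* Before the fix this used qemu_coroutine_enter(co), which
             * rebound co->ctx to the calling thread's context. Passing
             * ctx explicitly keeps the coroutine in its own context. */
            qemu_aio_coroutine_enter(ctx, co);
        }
    }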


  Commit: 49880165a44f26dc84651858750facdee31f2513
      https://github.com/qemu/qemu/commit/49880165a44f26dc84651858750facdee31f2513
  Author: Fam Zheng <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M job.c

  Log Message:
  -----------
  job: Fix nested aio_poll() hanging in job_txn_apply

All callers have acquired ctx already. Doing that again results in an
aio_poll() hang. This fixes the problem that a BDRV_POLL_WHILE() in the
callback cannot make progress because ctx is recursively locked, for
example, when drive-backup finishes.

There are two callers of job_finalize():

    address@hidden:~/work/qemu [master]$ git grep -w -A1 '^\s*job_finalize'
    blockdev.c:    job_finalize(&job->job, errp);
    blockdev.c-    aio_context_release(aio_context);
    --
    job-qmp.c:    job_finalize(job, errp);
    job-qmp.c-    aio_context_release(aio_context);
    --
    tests/test-blockjob.c:    job_finalize(&job->job, &error_abort);
    tests/test-blockjob.c-    assert(job->job.status == JOB_STATUS_CONCLUDED);

Ignoring the test, it's easy to see that both callers of job_finalize
(and job_do_finalize) have acquired the context.

Cc: address@hidden
Reported-by: Gu Nini <address@hidden>
Reviewed-by: Eric Blake <address@hidden>
Signed-off-by: Fam Zheng <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: d1756c780b7879fb64e41135feac781d84a1f995
      https://github.com/qemu/qemu/commit/d1756c780b7879fb64e41135feac781d84a1f995
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M job.c

  Log Message:
  -----------
  job: Fix missing locking due to mismerge

job_completed() had a problem with double locking that was recently
fixed independently by two different commits:

"job: Fix nested aio_poll() hanging in job_txn_apply"
"jobs: add exit shim"

One fix removed the first aio_context_acquire(), the other fix removed
the other one. Now we have a bug again and the code is run without any
locking.

Add it back in one of the places.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Reviewed-by: John Snow <address@hidden>


  Commit: 34dc97b9a0e592bc466bdb0bbfe45d77304a72b6
      https://github.com/qemu/qemu/commit/34dc97b9a0e592bc466bdb0bbfe45d77304a72b6
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M blockjob.c
    M include/block/blockjob.h
    M include/qemu/job.h
    M job.c

  Log Message:
  -----------
  blockjob: Wake up BDS when job becomes idle

In the context of draining a BDS, the .drained_poll callback of block
jobs is called. If this returns true (i.e. there is still some activity
pending), the drain operation may call aio_poll() with blocking=true to
wait for completion.

As soon as the pending activity is completed and the job finally arrives
in a quiescent state (i.e. its coroutine either yields with busy=false
or terminates), the block job must notify the aio_poll() loop to wake
up, otherwise we get a deadlock if both are running in different
threads.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
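
In sketch form, the notification amounts to kicking the wait loop
whenever the job goes idle (stand-in Job type; aio_wait_kick is shown
in the no-argument form it has by the end of this series):

    #include <stdbool.h>

    typedef struct Job { bool busy; } Job;   /* stand-in */

    void aio_wait_kick(void);                /* wakes a blocked AIO_WAIT_WHILE() */

    void job_enter_idle_sketch(Job *job)
    {
        job->busy = false;
        /* Without this kick, a drain running aio_poll(..., true) in
         * another thread could block forever waiting for the job to
         * become quiescent. */
        aio_wait_kick();
    }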


  Commit: 486574483aba988c83b20e7d3f1ccd50c4c333d8
      https://github.com/qemu/qemu/commit/486574483aba988c83b20e7d3f1ccd50c4c333d8
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M include/block/aio-wait.h

  Log Message:
  -----------
  aio-wait: Increase num_waiters even in home thread

Even if AIO_WAIT_WHILE() is called in the home context of the
AioContext, we still want to allow the condition to change depending on
other threads as long as they kick the AioWait. Specifically, block jobs
can be running in an I/O thread and should then be able to kick a drain
in the main loop context.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
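
A simplified model of the macro after this change (the real
AIO_WAIT_WHILE additionally drops and re-takes the AioContext lock when
polling from a non-home thread):

    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct AioContext AioContext;
    typedef struct AioWait { atomic_int num_waiters; } AioWait;

    bool aio_poll(AioContext *ctx, bool blocking);

    /* Count the waiter even in the home thread, so that a kick from an
     * I/O thread sees that someone is polling and wakes it up. */
    #define AIO_WAIT_WHILE_MODEL(wait, ctx, cond)          \
        do {                                               \
            atomic_fetch_add(&(wait)->num_waiters, 1);     \
            while (cond) {                                 \
                aio_poll((ctx), true);                     \
            }                                              \
            atomic_fetch_sub(&(wait)->num_waiters, 1);     \
        } while (0)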


  Commit: f62c172959cd2b6de4dd8ba782e855d64d94764b
      https://github.com/qemu/qemu/commit/f62c172959cd2b6de4dd8ba782e855d64d94764b
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: Drain with block jobs in an I/O thread

This extends the existing drain test with a block job to include
variants where the block job runs in a different AioContext.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>


  Commit: 30c070a547322a5e41ce129d540bca3653b1a9c8
      https://github.com/qemu/qemu/commit/30c070a547322a5e41ce129d540bca3653b1a9c8
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M include/qemu/job.h
    M tests/test-blockjob.c

  Log Message:
  -----------
  test-blockjob: Acquire AioContext around job_cancel_sync()

All callers in QEMU proper hold the AioContext lock when calling
job_finish_sync(). test-blockjob should do the same when it calls the
function indirectly through job_cancel_sync().

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>


  Commit: de0fbe64806321fc3e6399bfab360553db87a41d
      https://github.com/qemu/qemu/commit/de0fbe64806321fc3e6399bfab360553db87a41d
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M job.c

  Log Message:
  -----------
  job: Use AIO_WAIT_WHILE() in job_finish_sync()

job_finish_sync() needs to release the AioContext lock of the job before
calling aio_poll(). Otherwise, callbacks called by aio_poll() would
possibly take the lock a second time and run into a deadlock with a
nested AIO_WAIT_WHILE() call.

Also, job_drain() without aio_poll() isn't necessarily enough to make
progress on a job; it could depend on bottom halves being executed.

Combine both open-coded while loops into a single AIO_WAIT_WHILE() call
that solves both of these problems.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
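
Conceptually, for a caller on the main thread the AIO_WAIT_WHILE()
amounts to the following loop (a model of the macro's behavior, not its
actual expansion):

    #include <stdbool.h>

    typedef struct AioContext AioContext;
    typedef struct Job Job;

    void aio_context_acquire(AioContext *ctx);
    void aio_context_release(AioContext *ctx);
    bool aio_poll(AioContext *ctx, bool blocking);
    AioContext *qemu_get_aio_context(void);   /* the main loop's context */
    bool job_is_completed(Job *job);

    void job_wait_model(Job *job, AioContext *job_ctx)
    {
        while (!job_is_completed(job)) {
            /* Release the job's lock around the blocking poll so that
             * callbacks and nested AIO_WAIT_WHILE() calls can take it. */
            aio_context_release(job_ctx);
            aio_poll(qemu_get_aio_context(), true);
            aio_context_acquire(job_ctx);
        }
    }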


  Commit: ae23dde9dd486e57e152a0ebc9802caddedc45fc
      https://github.com/qemu/qemu/commit/ae23dde9dd486e57e152a0ebc9802caddedc45fc
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: Test AIO_WAIT_WHILE() in completion callback

This is a regression test for a deadlock that occurred in block job
completion callbacks (via job_defer_to_main_loop) because the AioContext
lock was taken twice: once in job_finish_sync() and then again in
job_defer_to_main_loop_bh(). This would cause AIO_WAIT_WHILE() to hang.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>


  Commit: aa1361d54aac43094b98024b8b6c804eb6e41661
      https://github.com/qemu/qemu/commit/aa1361d54aac43094b98024b8b6c804eb6e41661
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block/io.c
    M include/qemu/coroutine.h
    M util/qemu-coroutine.c

  Log Message:
  -----------
  block: Add missing locking in bdrv_co_drain_bh_cb()

bdrv_do_drained_begin/end() assume that they are called with the
AioContext lock of bs held. If we call drain functions from a coroutine
with the AioContext lock held, we yield and schedule a BH to move out of
coroutine context. This means that the lock for the home context of the
coroutine is released and must be re-acquired in the bottom half.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: fe5258a503a87e69be37c9ac48799e293809386e
      https://github.com/qemu/qemu/commit/fe5258a503a87e69be37c9ac48799e293809386e
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block/block-backend.c

  Log Message:
  -----------
  block-backend: Add .drained_poll callback

A bdrv_drain operation must ensure that all parents are quiesced; this
includes BlockBackends. Otherwise, callbacks invoked by requests that
are completed on the BDS layer, but not quite yet on the BlockBackend
layer, could still create new requests.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: 5ca9d21bd1c8eeb578d0964e31bd03d47c25773d
      https://github.com/qemu/qemu/commit/5ca9d21bd1c8eeb578d0964e31bd03d47c25773d
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block/block-backend.c

  Log Message:
  -----------
  block-backend: Fix potential double blk_delete()

blk_unref() first decreases the refcount of the BlockBackend and calls
blk_delete() if the refcount reaches zero. Requests can still be in
flight at this point; they are only drained during blk_delete():

At this point, arbitrary callbacks can run. If any callback takes a
temporary BlockBackend reference, it will first increase the refcount to
1 and then decrease it to 0 again, triggering another blk_delete(). This
will cause a use-after-free crash in the outer blk_delete().

Fix it by draining the BlockBackend before decreasing the refcount to 0.
Assert in blk_ref() that it never takes the first refcount (which would
mean that the BlockBackend is already being deleted).

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
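
A sketch of the fixed teardown order (refcnt handling simplified from
block/block-backend.c; the struct is a stand-in):

    #include <assert.h>
    #include <stddef.h>

    typedef struct BlockBackend { int refcnt; } BlockBackend;   /* simplified */

    void blk_drain(BlockBackend *blk);
    void blk_delete(BlockBackend *blk);

    void blk_unref_sketch(BlockBackend *blk)
    {
        if (blk == NULL) {
            return;
        }
        assert(blk->refcnt > 0);
        if (blk->refcnt > 1) {
            blk->refcnt--;
            return;
        }
        /* Drain while we still hold the last reference: a callback that
         * takes a temporary reference now goes 1 -> 2 -> 1 instead of
         * 0 -> 1 -> 0, so it can no longer trigger a nested blk_delete(). */
        blk_drain(blk);
        blk->refcnt = 0;
        blk_delete(blk);
    }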


  Commit: 46aaf2a566e364a62315219255099cbf1c9b990d
      https://github.com/qemu/qemu/commit/46aaf2a566e364a62315219255099cbf1c9b990d
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block/block-backend.c

  Log Message:
  -----------
  block-backend: Decrease in_flight only after callback

Request callbacks can do pretty much anything, including operations that
will yield from the coroutine (such as draining the backend). In that
case, a decreased in_flight would be visible to other code and could
lead to a drain completing while the callback hasn't actually completed
yet.

Note that reordering these operations forbids calling drain directly
inside an AIO callback. As Paolo explains, indirectly calling it is
okay:

- Calling it through a coroutine is okay, because then
  bdrv_drained_begin() goes through bdrv_co_yield_to_drain() and you
  have in_flight=2 when bdrv_co_yield_to_drain() yields, then soon
  in_flight=1 when the aio_co_wake() in the AIO callback completes, then
  in_flight=0 after the bottom half starts.

- Calling it through a bottom half would be okay too, as long as the AIO
  callback remembers to do inc_in_flight/dec_in_flight just like
  bdrv_co_yield_to_drain() and bdrv_co_drain_bh_cb() do

A few more important cases that come to mind:

- A coroutine that yields because of I/O is okay, with a sequence
  similar to bdrv_co_yield_to_drain().

- A coroutine that yields with no I/O pending will correctly decrease
  in_flight to zero before yielding.

- Calling more AIO from the callback won't overflow the counter just
  because of mutual recursion, because AIO functions always yield at
  least once before invoking the callback.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Reviewed-by: Paolo Bonzini <address@hidden>
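
In sketch form, the reordering looks like this (stand-in request type;
the real completion path lives in block/block-backend.c):

    #include <stdlib.h>

    typedef struct BlockBackend BlockBackend;
    typedef void BlockCompletionFunc(void *opaque, int ret);

    typedef struct BlkAioReq {      /* stand-in request */
        BlockBackend *blk;
        BlockCompletionFunc *cb;
        void *opaque;
        int ret;
    } BlkAioReq;

    void blk_dec_in_flight(BlockBackend *blk);

    void blk_aio_complete_sketch(BlkAioReq *req)
    {
        /* Run the callback first; it may drain or yield, and while it
         * does, in_flight must still account for this request. */
        req->cb(req->opaque, req->ret);
        blk_dec_in_flight(req->blk);    /* only after the callback */
        free(req);
    }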


  Commit: b5a7a0573530698ee448b063ac01d485e30446bd
      https://github.com/qemu/qemu/commit/b5a7a0573530698ee448b063ac01d485e30446bd
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M blockjob.c
    M include/qemu/job.h
    M job.c

  Log Message:
  -----------
  blockjob: Lie better in child_job_drained_poll()

Block jobs claim in .drained_poll() that they are in a quiescent state
as soon as job->deferred_to_main_loop is true. This is obviously wrong,
they still have a completion BH to run. We only get away with this
because commit 91af091f923 added an unconditional aio_poll(false) to the
drain functions, but this is bypassing the regular drain mechanisms.

However, just removing this and reporting that the job is still active
doesn't work either: the completion callbacks themselves call drain
functions (directly, or indirectly with bdrv_reopen), so they would
deadlock then.

As a better lie, report that the job is active as long as the BH is
pending, but falsely call it quiescent from the point in the BH when the
completion callback is called. At this point, nested drain calls won't
deadlock because they ignore the job, and outer drains will wait for the
job to really reach a quiescent state because the callback is already
running.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
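
A sketch of the refined answer (in_completion_cb is a hypothetical flag
standing in for how the real code tracks this point):

    #include <stdbool.h>

    typedef struct Job {                 /* stand-in fields */
        bool busy;
        bool deferred_to_main_loop;
        bool in_completion_cb;           /* hypothetical marker */
    } Job;

    bool child_job_drained_poll_sketch(Job *job)
    {
        if (job->busy) {
            return true;                 /* coroutine still running */
        }
        /* The pending completion BH still counts as activity, but once
         * the completion callback itself is running we claim quiescence
         * so that drains issued from inside it don't deadlock on us. */
        return job->deferred_to_main_loop && !job->in_completion_cb;
    }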


  Commit: 4cf077b59fc73eec29f8b7d082919dbb278bdc86
      https://github.com/qemu/qemu/commit/4cf077b59fc73eec29f8b7d082919dbb278bdc86
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block/io.c

  Log Message:
  -----------
  block: Remove aio_poll() in bdrv_drain_poll variants

bdrv_drain_poll_top_level() was buggy because it didn't release the
AioContext lock of the node to be drained before calling aio_poll().
This way, callbacks called by aio_poll() would possibly take the lock a
second time and run into a deadlock with a nested AIO_WAIT_WHILE() call.

However, it turns out that the aio_poll() call isn't actually needed any
more. It was introduced in commit 91af091f923, which is effectively
reverted by this patch. The cases it was supposed to fix are now covered
by bdrv_drain_poll(), which waits for block jobs to reach a quiescent
state.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: ecc1a5c790cf2c7732cb9755ca388c2fe108d1a1
      https://github.com/qemu/qemu/commit/ecc1a5c790cf2c7732cb9755ca388c2fe108d1a1
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: Test nested poll in bdrv_drain_poll_top_level()

This is a regression test for a deadlock that could occur in callbacks
called from the aio_poll() in bdrv_drain_poll_top_level(). The
AioContext lock wasn't released and therefore would be taken a second
time in the callback. This would cause a possible AIO_WAIT_WHILE() in
the callback to hang.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Fam Zheng <address@hidden>


  Commit: 644f3a29bd4974aefd46d2adb5062d86063c8a50
      https://github.com/qemu/qemu/commit/644f3a29bd4974aefd46d2adb5062d86063c8a50
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M job.c

  Log Message:
  -----------
  job: Avoid deadlocks in job_completed_txn_abort()

Amongst others, job_finalize_single() calls the .prepare/.commit/.abort
callbacks of the individual job driver. Recently, their use was adapted
for all block jobs so that they involve code calling AIO_WAIT_WHILE()
now. Such code must be called under the AioContext lock for the
respective job, but without holding any other AioContext lock.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: d49725af46a7710cde02cc120b7f1e485154b483
      https://github.com/qemu/qemu/commit/d49725af46a7710cde02cc120b7f1e485154b483
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: AIO_WAIT_WHILE() in job .commit/.abort

This adds tests for calling AIO_WAIT_WHILE() in the .commit and .abort
callbacks. Both reasons why .abort could be called for a single job are
tested: Either .run or .prepare could return an error.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: 5599c162c3bec2bc8f0123e4d5802a70d9984b3b
      https://github.com/qemu/qemu/commit/5599c162c3bec2bc8f0123e4d5802a70d9984b3b
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: Fix outdated comments

Commit 89bd030533e changed the test case from using job_sleep_ns() to
using qemu_co_sleep_ns() instead. Also, block_job_sleep_ns() became
job_sleep_ns() in commit 5d43e86e11f.

In both cases, some comments in the test case were not updated. Do that
now.

Reported-by: Max Reitz <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Eric Blake <address@hidden>


  Commit: cfe29d8294e06420e15d4938421ae006c8ac49e7
      https://github.com/qemu/qemu/commit/cfe29d8294e06420e15d4938421ae006c8ac49e7
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block.c
    M block/block-backend.c
    M block/io.c
    M blockjob.c
    M include/block/aio-wait.h
    M include/block/block.h
    M include/block/block_int.h
    M include/block/blockjob.h
    M job.c
    M util/aio-wait.c

  Log Message:
  -----------
  block: Use a single global AioWait

When draining a block node, we recurse to its parent and for subtree
drains also to its children. A single AIO_WAIT_WHILE() is then used to
wait for bdrv_drain_poll() to become true, which depends on all of the
nodes we recursed to. However, if the respective child or parent becomes
quiescent and calls bdrv_wakeup(), only the AioWait of the child/parent
is checked, while AIO_WAIT_WHILE() depends on the AioWait of the
original node.

Fix this by using a single AioWait for all callers of AIO_WAIT_WHILE().

This may mean that the draining thread gets a few more unnecessary
wakeups because an unrelated operation got completed, but we already
wake it up when something _could_ have changed rather than only if it
has certainly changed.

Apart from that, drain is a slow path anyway. In theory it would be
possible to use wakeups more selectively and still correctly, but the
gains are likely not worth the additional complexity. In fact, this
patch is a nice simplification for some places in the code.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Eric Blake <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
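
A simplified sketch of the resulting kick path (stand-in declarations,
close in spirit to util/aio-wait.c; the dummy BH exists only to wake a
blocked aio_poll):

    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct AioContext AioContext;
    typedef struct AioWait { atomic_int num_waiters; } AioWait;

    AioContext *qemu_get_aio_context(void);
    void aio_bh_schedule_oneshot(AioContext *ctx, void (*cb)(void *), void *opaque);

    static AioWait global_aio_wait;      /* one wait object for all callers */

    static void dummy_bh_cb(void *opaque)
    {
        (void)opaque;                    /* scheduling it is what wakes aio_poll() */
    }

    void aio_wait_kick(void)
    {
        /* Every waiter polls on the same global object, so one check
         * suffices no matter which node became quiescent. */
        if (atomic_load(&global_aio_wait.num_waiters) > 0) {
            aio_bh_schedule_oneshot(qemu_get_aio_context(), dummy_bh_cb, NULL);
        }
    }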


  Commit: d8b3afd597d54e496809b05ac39ac29a5799664f
      https://github.com/qemu/qemu/commit/d8b3afd597d54e496809b05ac39ac29a5799664f
  Author: Kevin Wolf <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  test-bdrv-drain: Test draining job source child and parent

For the block job drain test, don't only test draining the source and
the target node, but create a backing chain for the source
(source_backing <- source <- source_overlay) and test draining each of
the nodes in it.

When using iothreads, the source node (and therefore the job) is in a
different AioContext than the drain, which happens from the main
thread. This way, the main thread waits in AIO_WAIT_WHILE() for the
iothread to make progress and aio_wait_kick() is required to notify it.
The test validates that calling bdrv_wakeup() for a child or a parent
node will actually notify AIO_WAIT_WHILE() instead of letting it hang.

Increase the sleep time a bit (to 1 ms) because the test case is racy
and with the shorter sleep, it didn't reproduce the bug it is supposed
to test for me under 'rr record -n'.

This was because bdrv_drain_invoke_entry() (in the main thread) was only
called after the job had already reached the pause point, so we got a
bdrv_dec_in_flight() from the main thread and the additional
aio_wait_kick() when the job becomes idle (that we really wanted to test
here) wasn't even necessary any more to make progress.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Eric Blake <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: 9c76ff9c16be890e70fce30754b096ff9950d1ee
      https://github.com/qemu/qemu/commit/9c76ff9c16be890e70fce30754b096ff9950d1ee
  Author: Max Reitz <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block.c
    M block/block-backend.c
    M block/io.c
    M block/linux-aio.c
    M blockdev.c
    M blockjob.c
    M include/block/aio-wait.h
    M include/block/block.h
    M include/block/block_int.h
    M include/block/blockjob.h
    M include/qemu/coroutine.h
    M include/qemu/job.h
    M job.c
    M qapi/block-core.json
    M tests/qemu-iotests/040
    M tests/qemu-iotests/040.out
    M tests/qemu-iotests/051
    M tests/qemu-iotests/051.out
    M tests/qemu-iotests/051.pc.out
    M tests/test-bdrv-drain.c
    M tests/test-blockjob.c
    M util/aio-wait.c
    M util/async.c
    M util/qemu-coroutine.c

  Log Message:
  -----------
  Merge remote-tracking branch 'kevin/tags/for-upstream' into block

Block layer patches:

- Fix some jobs/drain/aio_poll related hangs
- commit: Add top-node/base-node options
- linux-aio: Fix locking for qemu_laio_process_completions()
- Fix use after free error in bdrv_open_inherit

# gpg: Signature made Tue Sep 25 15:54:01 2018 CEST
# gpg:                using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <address@hidden>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74  56FE 7F09 B272 C88F 2FD6

* kevin/tags/for-upstream: (26 commits)
  test-bdrv-drain: Test draining job source child and parent
  block: Use a single global AioWait
  test-bdrv-drain: Fix outdated comments
  test-bdrv-drain: AIO_WAIT_WHILE() in job .commit/.abort
  job: Avoid deadlocks in job_completed_txn_abort()
  test-bdrv-drain: Test nested poll in bdrv_drain_poll_top_level()
  block: Remove aio_poll() in bdrv_drain_poll variants
  blockjob: Lie better in child_job_drained_poll()
  block-backend: Decrease in_flight only after callback
  block-backend: Fix potential double blk_delete()
  block-backend: Add .drained_poll callback
  block: Add missing locking in bdrv_co_drain_bh_cb()
  test-bdrv-drain: Test AIO_WAIT_WHILE() in completion callback
  job: Use AIO_WAIT_WHILE() in job_finish_sync()
  test-blockjob: Acquire AioContext around job_cancel_sync()
  test-bdrv-drain: Drain with block jobs in an I/O thread
  aio-wait: Increase num_waiters even in home thread
  blockjob: Wake up BDS when job becomes idle
  job: Fix missing locking due to mismerge
  job: Fix nested aio_poll() hanging in job_txn_apply
  ...

Signed-off-by: Max Reitz <address@hidden>


  Commit: c5e4e49258e9b89cb34c085a419dd9f862935c48
      https://github.com/qemu/qemu/commit/c5e4e49258e9b89cb34c085a419dd9f862935c48
  Author: Peter Maydell <address@hidden>
  Date:   2018-09-25 (Tue, 25 Sep 2018)

  Changed paths:
    M block.c
    M block/block-backend.c
    M block/commit.c
    M block/io.c
    M block/linux-aio.c
    M block/mirror.c
    M block/stream.c
    M blockdev.c
    M blockjob.c
    M hmp.c
    M include/block/aio-wait.h
    M include/block/block.h
    M include/block/block_int.h
    M include/block/blockjob.h
    M include/qemu/coroutine.h
    M include/qemu/job.h
    M job.c
    M qapi/block-core.json
    M tests/qemu-iotests/040
    M tests/qemu-iotests/040.out
    M tests/qemu-iotests/051
    M tests/qemu-iotests/051.out
    M tests/qemu-iotests/051.pc.out
    M tests/test-bdrv-drain.c
    M tests/test-blockjob-txn.c
    M tests/test-blockjob.c
    M util/aio-wait.c
    M util/async.c
    M util/qemu-coroutine.c

  Log Message:
  -----------
  Merge remote-tracking branch 'remotes/xanclic/tags/pull-block-2018-09-25' into staging

Block layer patches:
- Drain fixes
- node-name parameters for block-commit
- Refactor block jobs to use transactional callbacks for exiting

# gpg: Signature made Tue 25 Sep 2018 16:12:44 BST
# gpg:                using RSA key F407DB0061D5CF40
# gpg: Good signature from "Max Reitz <address@hidden>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1  1829 F407 DB00 61D5 CF40

* remotes/xanclic/tags/pull-block-2018-09-25: (42 commits)
  test-bdrv-drain: Test draining job source child and parent
  block: Use a single global AioWait
  test-bdrv-drain: Fix outdated comments
  test-bdrv-drain: AIO_WAIT_WHILE() in job .commit/.abort
  job: Avoid deadlocks in job_completed_txn_abort()
  test-bdrv-drain: Test nested poll in bdrv_drain_poll_top_level()
  block: Remove aio_poll() in bdrv_drain_poll variants
  blockjob: Lie better in child_job_drained_poll()
  block-backend: Decrease in_flight only after callback
  block-backend: Fix potential double blk_delete()
  block-backend: Add .drained_poll callback
  block: Add missing locking in bdrv_co_drain_bh_cb()
  test-bdrv-drain: Test AIO_WAIT_WHILE() in completion callback
  job: Use AIO_WAIT_WHILE() in job_finish_sync()
  test-blockjob: Acquire AioContext around job_cancel_sync()
  test-bdrv-drain: Drain with block jobs in an I/O thread
  aio-wait: Increase num_waiters even in home thread
  blockjob: Wake up BDS when job becomes idle
  job: Fix missing locking due to mismerge
  job: Fix nested aio_poll() hanging in job_txn_apply
  ...

Signed-off-by: Peter Maydell <address@hidden>


Compare: https://github.com/qemu/qemu/compare/0a736f7ab83d...c5e4e49258e9
      **NOTE:** This service has been marked for deprecation: https://developer.github.com/changes/2018-04-25-github-services-deprecation/

      Functionality will be removed from GitHub.com on January 31st, 2019.
