From: Peter Maydell
Subject: [Qemu-commits] [qemu/qemu] dfa26a: iotests/118: Test media change for scsi-cd
Date: Fri, 16 Aug 2019 09:21:06 -0700

  Branch: refs/heads/master
  Home:   https://github.com/qemu/qemu
  Commit: dfa26a110c7e88887ed5732c834ed5c1d22bd2e6
      https://github.com/qemu/qemu/commit/dfa26a110c7e88887ed5732c834ed5c1d22bd2e6
  Author: Kevin Wolf <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M tests/qemu-iotests/118
    M tests/qemu-iotests/118.out

  Log Message:
  -----------
  iotests/118: Test media change for scsi-cd

The test covered only floppy and ide-cd. Add scsi-cd as well.
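
For illustration, such a check might look roughly like this in the iotests
framework (a hedged sketch, not the commit's actual diff; the device id
'cd0', the image names, and the helper usage are assumptions):

    import os
    import iotests

    old_img = os.path.join(iotests.test_dir, 'test0.img')
    new_img = os.path.join(iotests.test_dir, 'test1.img')

    class TestScsiCdChange(iotests.QMPTestCase):
        def setUp(self):
            self.vm = iotests.VM()
            self.vm.add_drive(old_img, 'media=cdrom', interface='none')
            self.vm.add_device('virtio-scsi-pci')
            self.vm.add_device('scsi-cd,drive=drive0,id=cd0')
            self.vm.launch()

        def tearDown(self):
            self.vm.shutdown()

        def test_change_medium(self):
            # blockdev-change-medium swaps the inserted medium in one step
            result = self.vm.qmp('blockdev-change-medium', id='cd0',
                                 filename=new_img, format=iotests.imgfmt)
            self.assert_qmp(result, 'return', {})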

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: dfc828941c0f1771c5861fce11a428c8c64a206d
      https://github.com/qemu/qemu/commit/dfc828941c0f1771c5861fce11a428c8c64a206d
  Author: Kevin Wolf <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M tests/qemu-iotests/118

  Log Message:
  -----------
  iotests/118: Create test classes dynamically

We're getting a ridiculous number of child classes of
TestInitiallyFilled and TestInitiallyEmpty that differ only in a few
attributes that we want to test in all combinations.

Instead of explicitly writing down every combination, let's use a loop
and create those classes dynamically.
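
The pattern looks roughly like this (a minimal sketch, not the commit's
actual code; the attribute names are made up):

    import itertools

    class TestInitiallyFilled:          # stand-in for the real base class
        pass

    for interface, use_blockdev in itertools.product(
            ['floppy', 'ide-cd', 'scsi-cd'], [False, True]):
        name = 'Test_%s_%s' % (interface.replace('-', '_'),
                               'blockdev' if use_blockdev else 'drive')
        attrs = {'interface': interface, 'use_blockdev': use_blockdev}
        # type() builds the subclass; putting it into globals() lets
        # unittest's default loader discover it by name
        globals()[name] = type(name, (TestInitiallyFilled,), attrs)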

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: 19462c4bdd3bab7d277aafbdc178daa6ca074c9c
      https://github.com/qemu/qemu/commit/19462c4bdd3bab7d277aafbdc178daa6ca074c9c
  Author: Kevin Wolf <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M tests/qemu-iotests/118
    M tests/qemu-iotests/118.out

  Log Message:
  -----------
  iotests/118: Add -blockdev based tests

The code path for -device drive=<node-name> (or with no drive=... option
at all for empty drives), which is supposed to be used with -blockdev,
differs enough from the -drive based path with a user-owned BlockBackend
that we want to test both paths at least for the basic tests implemented
by TestInitiallyFilled and TestInitiallyEmpty.

This would have caught the bug recently fixed for inserting read-only
nodes into a scsi-cd created without a drive=... option.
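
The difference between the two configurations might be illustrated like
this (a hedged sketch; the node names and exact options are assumptions):

    import iotests
    vm = iotests.VM()
    vm.add_device('virtio-scsi-pci')

    # Legacy path: -drive creates a user-owned BlockBackend named drive0
    vm.add_drive(old_img, 'media=cdrom', interface='none')
    vm.add_device('scsi-cd,drive=drive0')

    # -blockdev path: the device references the node by its node-name
    vm.add_args('-blockdev',
                'driver=file,node-name=file0,filename=%s' % old_img)
    vm.add_args('-device', 'scsi-cd,drive=file0')

    # A scsi-cd with no drive=... option at all starts out empty
    vm.add_args('-device', 'scsi-cd')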

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: 980448f17a24573fea53ceec3fa66353e9fd4092
      https://github.com/qemu/qemu/commit/980448f17a24573fea53ceec3fa66353e9fd4092
  Author: Kevin Wolf <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M tests/qemu-iotests/234
    M tests/qemu-iotests/iotests.py

  Log Message:
  -----------
  iotests: Move migration helpers to iotests.py

234 implements functions that are useful for doing migration between two
VMs. Move them to iotests.py so that other test cases can use them, too.
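
As a rough idea of the kind of helper involved (hypothetical names, not
necessarily the functions the commit actually moved; the destination VM
is assumed to have been started with '-incoming defer'):

    import time

    def migrate(source_vm, dest_vm, uri):
        # Tell the destination to listen, then start the migration
        dest_vm.qmp('migrate-incoming', uri=uri)
        source_vm.qmp('migrate', uri=uri)
        while True:
            status = source_vm.qmp('query-migrate')['return'].get('status')
            if status in ('completed', 'failed'):
                return status
            time.sleep(0.1)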

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: 5b96e6a002d8791d24ef69ed67ee6d264239622d
      https://github.com/qemu/qemu/commit/5b96e6a002d8791d24ef69ed67ee6d264239622d
  Author: Kevin Wolf <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    A tests/qemu-iotests/262
    A tests/qemu-iotests/262.out
    M tests/qemu-iotests/group

  Log Message:
  -----------
  iotests: Test migration with all kinds of filter nodes

This test case is motivated by commit 2b23f28639 ('block/copy-on-read:
Fix permissions for inactive node'). Instead of just testing
copy-on-read on migration, let's stack all sorts of filter nodes on top
of each other and test whether the resulting VM can still migrate
successfully. For good measure, put everything into an iothread, because
why not?
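
On the command line, such a stack might be built roughly like this (the
driver names are real blockdev drivers, but the exact chain and the node
names in the test are assumptions):

    import os
    import iotests
    img = os.path.join(iotests.test_dir, 'test.qcow2')  # hypothetical image
    vm = iotests.VM()

    vm.add_args('-object', 'iothread,id=iothread0')
    vm.add_args('-object', 'throttle-group,id=tg0,x-bps-total=1048576')
    vm.add_args('-blockdev', 'driver=file,node-name=file0,filename=%s' % img)
    vm.add_args('-blockdev', 'driver=qcow2,node-name=fmt0,file=file0')
    vm.add_args('-blockdev', 'driver=copy-on-read,node-name=cor0,file=fmt0')
    vm.add_args('-blockdev',
                'driver=throttle,node-name=tflt0,file=cor0,throttle-group=tg0')
    vm.add_args('-device', 'virtio-blk,drive=tflt0,iothread=iothread0')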

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: e444fa83126d56f3feafe3c3397c149cac82ce96
      https://github.com/qemu/qemu/commit/e444fa83126d56f3feafe3c3397c149cac82ce96
  Author: Kevin Wolf <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M block.c

  Log Message:
  -----------
  block: Simplify bdrv_filter_default_perms()

The same change as commit 2b23f28639 ('block/copy-on-read: Fix
permissions for inactive node') made for the copy-on-read driver can be
made for bdrv_filter_default_perms(): Retaining the old permissions from
the BdrvChild if one is given complicates things unnecessarily, when in
the end it only means that the permissions set in the c == NULL case
(i.e. during child creation) are retained.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Eric Blake <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: 637d54a5f3a1965fa6aea9e41c4f84abf7a36de9
      https://github.com/qemu/qemu/commit/637d54a5f3a1965fa6aea9e41c4f84abf7a36de9
  Author: Max Reitz <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M block.c

  Log Message:
  -----------
  block: Keep subtree drained in drop_intermediate

bdrv_drop_intermediate() calls BdrvChildRole.update_filename().  That
may poll, thus changing the graph, which potentially breaks the
QLIST_FOREACH_SAFE() loop.

Just keep the whole subtree drained.  This is probably the right thing
to do anyway (dropping nodes while the subtree is not drained seems
wrong).

Signed-off-by: Max Reitz <address@hidden>
Reviewed-by: Kevin Wolf <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: debc29276714756f278d98ae1929d7813f04030c
      https://github.com/qemu/qemu/commit/debc29276714756f278d98ae1929d7813f04030c
  Author: Max Reitz <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M block.c

  Log Message:
  -----------
  block: Reduce (un)drains when replacing a child

Currently, bdrv_replace_child_noperm() undrains the parent until it is
completely undrained, then re-drains it after attaching the new child
node.

This is a problem with bdrv_drop_intermediate(): We want to keep the
whole subtree drained, including parents, while the operation is
under way.  bdrv_replace_child_noperm() breaks this by allowing each
parent to become unquiesced briefly before re-draining it.

In fact, there is no reason why the parent should become unquiesced and
be allowed to submit requests to the new child node if that new node is
supposed to be kept drained.  So if anything, we have to drain the
parent before detaching the old child node.  Conversely, we have to
undrain it only after attaching the new child node.

Thus, change the whole drain algorithm here: Calculate the number of
times we have to drain/undrain the parent before replacing the child
node, then drain it (if necessary), replace the child node, and then
undrain it.
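
In toy Python form (the real code is C in block.c; this just illustrates
the ordering, with all the accounting reduced to a counter):

    class ToyParent:
        def __init__(self):
            self.quiesce = 0
            self.child = 'old'

        def replace_child(self, new_child, drains):
            # Drain *before* detaching the old child ...
            for _ in range(drains):
                self.quiesce += 1
            self.child = None
            assert self.quiesce > 0   # parent never unquiesced mid-replace
            self.child = new_child
            # ... and undrain only after attaching the new one
            for _ in range(drains):
                self.quiesce -= 1

    p = ToyParent()
    p.replace_child('new', drains=1)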

Signed-off-by: Max Reitz <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 9746b35cf3c1a60cf569d8caa434aa1ee919e63f
      https://github.com/qemu/qemu/commit/9746b35cf3c1a60cf569d8caa434aa1ee919e63f
  Author: Max Reitz <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  tests: Test polling in bdrv_drop_intermediate()

Signed-off-by: Max Reitz <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 0513f9841f14b918bd7aa7f1c8fb0ac3b0f6dea0
      https://github.com/qemu/qemu/commit/0513f9841f14b918bd7aa7f1c8fb0ac3b0f6dea0
  Author: Max Reitz <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  tests: Test mid-drain bdrv_replace_child_noperm()

Add a test for what happens when you call bdrv_replace_child_noperm()
for various drain situations ({old,new} child {drained,not drained}).

Most importantly, if both the old and the new child are drained, the
parent must not be undrained at any point.

Signed-off-by: Max Reitz <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 48057fc2b4f010a5c92035d223a0cceed3635237
      https://github.com/qemu/qemu/commit/48057fc2b4f010a5c92035d223a0cceed3635237
  Author: Max Reitz <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    A tests/qemu-iotests/258
    A tests/qemu-iotests/258.out
    M tests/qemu-iotests/group

  Log Message:
  -----------
  iotests: Add test for concurrent stream/commit

We already have 030 for that in general, but this tests very specific
cases of both jobs finishing concurrently.

Signed-off-by: Max Reitz <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 421919d76b1755e1f9478f85cb71829ca5150ade
      https://github.com/qemu/qemu/commit/421919d76b1755e1f9478f85cb71829ca5150ade
  Author: Kevin Wolf <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M block/block-backend.c
    M hw/block/hd-geometry.c
    M include/sysemu/block-backend.h

  Log Message:
  -----------
  block: Remove blk_pread_unthrottled()

The functionality offered by blk_pread_unthrottled() goes back to commit
498e386c584. At the time, we couldn't perform I/O throttling with
synchronous requests because timers wouldn't be executed in polling
loops. So the commit automatically disabled I/O throttling as soon as a
synchronous request was issued.

However, for geometry detection during disk initialisation, we always
used (and still use) synchronous requests even if guest requests use AIO
later. We didn't want geometry detection to disable I/O throttling, so
bdrv_pread_unthrottled() was introduced, which disabled throttling only
temporarily.

None of this is necessary any more, because we now run timers in polling
loops and even synchronous requests use the coroutine infrastructure
internally. For this reason, commit 90c78624f already removed the
automatic disabling of I/O throttling.

It's time to get rid of the workaround for the removed code, and with it
the abuse of blk_root_drained_begin()/end().

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: d2da5e288a2e71e82866c8fdefd41b5727300124
      https://github.com/qemu/qemu/commit/d2da5e288a2e71e82866c8fdefd41b5727300124
  Author: Kevin Wolf <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M block/mirror.c

  Log Message:
  -----------
  mirror: Keep mirror_top_bs drained after dropping permissions

mirror_top_bs is currently implicitly drained through its connection to
the source or the target node. However, the drain section for target_bs
ends early, after moving mirror_top_bs from src to target_bs, so
requests can already be restarted while mirror_top_bs is still present
in the chain even though it has dropped all permissions. This results in
an assertion failure like this:

    qemu-system-x86_64: block/io.c:1634: bdrv_co_write_req_prepare:
    Assertion `child->perm & BLK_PERM_WRITE' failed.

Keep mirror_top_bs drained until all graph changes have completed.

Cc: address@hidden
Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: cf3129323f900ef5ddbccbe86e4fa801e88c566e
      https://github.com/qemu/qemu/commit/cf3129323f900ef5ddbccbe86e4fa801e88c566e
  Author: Kevin Wolf <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M block/backup.c
    M block/block-backend.c
    M block/commit.c
    M block/mirror.c
    M blockjob.c
    M include/sysemu/block-backend.h
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  block-backend: Queue requests while drained

This fixes devices like IDE that can still start new requests from I/O
handlers in the CPU thread while the block backend is drained.

The basic assumption is that in a drain section, no new requests should
be allowed through a BlockBackend (blk_drained_begin/end don't exist,
we get drain sections only on the node level). However, there are two
special cases where requests should not be queued:

1. Block jobs: We already make sure that block jobs are paused in a
   drain section, so they won't start new requests. However, if
   drain_begin is called on the job's BlockBackend first, we can
   deadlock because the job stays busy until it reaches a pause
   point - which it can't if its requests aren't processed any more.

   The proper solution here would be to make all requests through the
   job's filter node instead of using a BlockBackend. For now, just
   disabling request queuing on the job BlockBackend is simpler.

2. In test cases where making requests through bdrv_* would be
   cumbersome because we'd need a BdrvChild. As we already have the
   functionality to disable request queuing from case 1, use it in
   tests, too, for convenience.
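
The queuing behaviour might be modelled like this (a toy Python sketch;
the real implementation lives in block/block-backend.c, uses coroutines,
and the names here are made up):

    import collections

    class ToyBlockBackend:
        def __init__(self):
            self.quiesce = 0
            self.disable_request_queuing = False   # jobs set this (case 1)
            self.queued = collections.deque()

        def drained_begin(self):
            self.quiesce += 1

        def drained_end(self):
            self.quiesce -= 1
            if self.quiesce == 0:
                while self.queued:
                    self.queued.popleft()()        # restart queued requests

        def submit(self, request):
            if self.quiesce and not self.disable_request_queuing:
                self.queued.append(request)        # hold until undrained
            else:
                request()                          # process immediately

    blk = ToyBlockBackend()
    blk.drained_begin()
    blk.submit(lambda: print('I/O'))   # queued, runs on drained_end()
    blk.drained_end()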

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>


  Commit: ffd8e8ffd5410ecea17c62785bf445b7ecddd8d6
      https://github.com/qemu/qemu/commit/ffd8e8ffd5410ecea17c62785bf445b7ecddd8d6
  Author: Kevin Wolf <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M qemu-deprecated.texi
    M qemu-img.c

  Log Message:
  -----------
  qemu-img convert: Deprecate using -n and -o together

bdrv_create options specified with -o have no effect when skipping image
creation with -n, so this doesn't make sense. Warn against the misuse
and deprecate the combination so we can make it a hard error later.

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Reviewed-by: John Snow <address@hidden>
Reviewed-by: Eric Blake <address@hidden>


  Commit: a6b257a08e3d72219f03e461a52152672fec0612
      https://github.com/qemu/qemu/commit/a6b257a08e3d72219f03e461a52152672fec0612
  Author: Nir Soffer <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M block/file-posix.c

  Log Message:
  -----------
  file-posix: Handle undetectable alignment

In some cases buf_align or request_alignment cannot be detected:

1. With Gluster, buf_align cannot be detected since the actual I/O is
   done on the Gluster server, and qemu's buffer alignment does not
   matter. Since we have no alignment requirement, buf_align=1 is the
   best value.

2. With a local XFS filesystem, buf_align cannot be detected if reading
   from an unallocated area. In this case we must align the buffer, but
   we don't know the correct size. Using the wrong alignment results in
   an I/O error.

3. With Gluster backed by XFS, request_alignment cannot be detected if
   reading from an unallocated area. In this case we need to use the
   correct alignment, and failing to do so results in I/O errors.

4. With NFS, the server does not use direct I/O, so neither buf_align
   nor request_alignment can be detected. In this case we don't need any
   alignment, so we can use buf_align=1 and request_alignment=1.

These cases seem to work when the storage sector size is 512 bytes,
because the current code starts checking with align=512. If the check
succeeds because alignment cannot be detected, we use 512. But this does
not work for storage with a 4k sector size.

To determine whether we can detect the alignment, we probe first with
align=1. If probing succeeds, either there is no alignment requirement
(cases 1 and 4) or we are probing an unallocated area (cases 2 and 3).
Since we have no way to tell which, we treat this as an undetectable
alignment. If probing with align=1 fails with EINVAL, but probing with
one of the expected alignments succeeds, we know that we found a working
alignment.

In practice, the alignment requirements are the same for buffer
alignment, buffer length, and offset in the file. So if we cannot detect
buf_align, we can use the request alignment. If we cannot detect the
request alignment, we can fall back to a safe value. To use this logic,
we probe request_alignment first instead of buf_align.
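
A rough Python rendering of the probing idea (Linux-only illustration;
the real code is C in block/file-posix.c, and the list of alignments to
try is an assumption):

    import errno
    import mmap
    import os

    def probe_request_alignment(path):
        """Return a working O_DIRECT alignment, or None if undetectable."""
        fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
        buf = mmap.mmap(-1, 4096)               # page-aligned buffer
        try:
            for align in (1, 512, 1024, 2048, 4096):
                try:
                    os.preadv(fd, [memoryview(buf)[:align]], 0)
                except OSError as e:
                    if e.errno == errno.EINVAL:
                        continue                # misaligned, try next size
                    raise
                # align=1 succeeding means we cannot tell: either there is
                # no requirement, or we read from an unallocated area
                return None if align == 1 else align
            return None
        finally:
            buf.close()
            os.close(fd)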

Here is a table showing the behaviour with the current code (the value
in parentheses is the optimal value).

Case    Sector    buf_align (opt)   request_alignment (opt)     result
======================================================================
1       512       512   (1)          512   (512)                 OK
1       4096      512   (1)          4096  (4096)                FAIL
----------------------------------------------------------------------
2       512       512   (512)        512   (512)                 OK
2       4096      512   (4096)       4096  (4096)                FAIL
----------------------------------------------------------------------
3       512       512   (1)          512   (512)                 OK
3       4096      512   (1)          512   (4096)                FAIL
----------------------------------------------------------------------
4       512       512   (1)          512   (1)                   OK
4       4096      512   (1)          512   (1)                   OK

The same cases with this change:

Case    Sector    buf_align (opt)   request_alignment (opt)     result
======================================================================
1       512       512   (1)          512   (512)                 OK
1       4096      4096  (1)          4096  (4096)                OK
----------------------------------------------------------------------
2       512       512   (512)        512   (512)                 OK
2       4096      4096  (4096)       4096  (4096)                OK
----------------------------------------------------------------------
3       512       4096  (1)          4096  (512)                 OK
3       4096      4096  (1)          4096  (4096)                OK
----------------------------------------------------------------------
4       512       4096  (1)          4096  (1)                   OK
4       4096      4096  (1)          4096  (1)                   OK

I tested that provisioning VMs and copying disks on local XFS and
Gluster with a 4k sector size now works, resolving bugs [1] and [2].
I also tested on XFS, NFS, and Gluster with a 512-byte sector size.

[1] https://bugzilla.redhat.com/1737256
[2] https://bugzilla.redhat.com/1738657

Signed-off-by: Nir Soffer <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: e018ccb3fbfa1d446ca7b49266c8a80dce40612d
      https://github.com/qemu/qemu/commit/e018ccb3fbfa1d446ca7b49266c8a80dce40612d
  Author: Peter Maydell <address@hidden>
  Date:   2019-08-16 (Fri, 16 Aug 2019)

  Changed paths:
    M block.c
    M block/backup.c
    M block/block-backend.c
    M block/commit.c
    M block/file-posix.c
    M block/mirror.c
    M blockjob.c
    M hw/block/hd-geometry.c
    M include/sysemu/block-backend.h
    M qemu-deprecated.texi
    M qemu-img.c
    M tests/qemu-iotests/118
    M tests/qemu-iotests/118.out
    M tests/qemu-iotests/234
    A tests/qemu-iotests/258
    A tests/qemu-iotests/258.out
    A tests/qemu-iotests/262
    A tests/qemu-iotests/262.out
    M tests/qemu-iotests/group
    M tests/qemu-iotests/iotests.py
    M tests/test-bdrv-drain.c

  Log Message:
  -----------
  Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging

Block layer patches:

- file-posix: Fix O_DIRECT alignment detection
- Fixes for concurrent block jobs
- block-backend: Queue requests while drained (fix IDE vs. job crashes)
- qemu-img convert: Deprecate using -n and -o together
- iotests: Migration tests with filter nodes
- iotests: More media change tests

# gpg: Signature made Fri 16 Aug 2019 10:29:18 BST
# gpg:                using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <address@hidden>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74  56FE 7F09 B272 C88F 2FD6

* remotes/kevin/tags/for-upstream:
  file-posix: Handle undetectable alignment
  qemu-img convert: Deprecate using -n and -o together
  block-backend: Queue requests while drained
  mirror: Keep mirror_top_bs drained after dropping permissions
  block: Remove blk_pread_unthrottled()
  iotests: Add test for concurrent stream/commit
  tests: Test mid-drain bdrv_replace_child_noperm()
  tests: Test polling in bdrv_drop_intermediate()
  block: Reduce (un)drains when replacing a child
  block: Keep subtree drained in drop_intermediate
  block: Simplify bdrv_filter_default_perms()
  iotests: Test migration with all kinds of filter nodes
  iotests: Move migration helpers to iotests.py
  iotests/118: Add -blockdev based tests
  iotests/118: Create test classes dynamically
  iotests/118: Test media change for scsi-cd

Signed-off-by: Peter Maydell <address@hidden>


Compare: https://github.com/qemu/qemu/compare/c6a2225a5a5e...e018ccb3fbfa


