qemu-discuss

Re: [Qemu-discuss] Unable to start/deploy VMs after Qemu/Gluster upgrade


From: Martin Toth
Subject: Re: [Qemu-discuss] Unable to start/deploy VMs after Qemu/Gluster upgrade to 2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1
Date: Sun, 27 Nov 2016 16:40:28 +0100

I've solved the problem by upgrading GlusterFS and QEMU to the following versions on Ubuntu 14.04 (Trusty) - one possible apt invocation is sketched after the list:

glusterfs-client - 3.8.6-ubuntu1~trusty1
glusterfs-common - 3.8.6-ubuntu1~trusty1
glusterfs-server - 3.8.6-ubuntu1~trusty1
qemu-keymaps - 2.0.0+dfsg-2ubuntu1.30glusterfs3.8.6trusty1
qemu-kvm - 2.0.0+dfsg-2ubuntu1.30glusterfs3.8.6trusty1
qemu-system-common - 2.0.0+dfsg-2ubuntu1.30glusterfs3.8.6trusty1
qemu-system-x86 - 2.0.0+dfsg-2ubuntu1.30glusterfs3.8.6trusty1
qemu-utils - 2.0.0+dfsg-2ubuntu1.30glusterfs3.8.6trusty1
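
For reference, one way to pull these in could look roughly like the following (a sketch only; it assumes the repository providing these builds is already configured in APT and publishes exactly these version strings):

apt-get update
apt-get install \
    glusterfs-client=3.8.6-ubuntu1~trusty1 \
    glusterfs-common=3.8.6-ubuntu1~trusty1 \
    glusterfs-server=3.8.6-ubuntu1~trusty1 \
    qemu-keymaps=2.0.0+dfsg-2ubuntu1.30glusterfs3.8.6trusty1 \
    qemu-kvm=2.0.0+dfsg-2ubuntu1.30glusterfs3.8.6trusty1 \
    qemu-system-common=2.0.0+dfsg-2ubuntu1.30glusterfs3.8.6trusty1 \
    qemu-system-x86=2.0.0+dfsg-2ubuntu1.30glusterfs3.8.6trusty1 \
    qemu-utils=2.0.0+dfsg-2ubuntu1.30glusterfs3.8.6trusty1    # pin exactly the versions listed above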

Gluster now works fine, even with QEMU accessing it through libgfapi.

BR,
Martin

On 25 Nov 2016, at 17:54, Martin Toth <address@hidden> wrote:

Hi,

Thanks for the suggestion, but my VM disk file (qcow2 image) is 512-byte aligned (AFAIK).

address@hidden:/mnt/datastore0/64# stat disk.1
 File: ‘disk.1.backup’
 Size: 5228527616 Blocks: 10211968   IO Block: 131072 regular file
Device: 21h/33d Inode: 12589621850231693320  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2016-11-25 17:34:28.992251577 +0100
Modify: 2016-11-25 17:34:28.596250240 +0100
Change: 2016-11-25 17:34:28.924251347 +0100
Birth: -
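
A quick arithmetic check on the size reported above confirms this (5228527616 is an exact multiple of 512, consistent with the block count of 10211968):

echo $(( 5228527616 % 512 ))   # prints 0, i.e. the file size is 512-byte aligned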

address@hidden:/mnt/datastore0/64# qemu-img info disk.1
image: disk.1.backup
file format: qcow2
virtual size: 10G (10747904000 bytes)
disk size: 4.9G
cluster_size: 65536
Format specific information:
   compat: 1.1
   lazy refcounts: false

Moreover, this qemu-img create causes a segmentation fault. Log below.

address@hidden:/# qemu-img create -f qcow2 gluster://node1/vmvol/64/foo.qcow2 1G
Formatting 'gluster://node1/vmvol/64/foo.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 lazy_refcounts=off
[2016-11-25 16:52:22.536760] I [MSGID: 104045] [glfs-master.c:96:notify] 0-gfapi: New graph 6e6f6465-312d-3331-3934-352d32303136 (0) coming up
[2016-11-25 16:52:22.536792] I [MSGID: 114020] [client.c:2113:notify] 0-vmvol-client-0: parent translators are ready, attempting connect on transport
[2016-11-25 16:52:22.537022] I [MSGID: 114020] [client.c:2113:notify] 0-vmvol-client-1: parent translators are ready, attempting connect on transport
[2016-11-25 16:52:22.537110] I [rpc-clnt.c:1960:rpc_clnt_reconfig] 0-vmvol-client-0: changing port to 49152 (from 0)
[2016-11-25 16:52:22.537198] I [MSGID: 114020] [client.c:2113:notify] 0-vmvol-client-2: parent translators are ready, attempting connect on transport
[2016-11-25 16:52:22.537511] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmvol-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-11-25 16:52:22.537677] I [rpc-clnt.c:1960:rpc_clnt_reconfig] 0-vmvol-client-1: changing port to 49152 (from 0)
[2016-11-25 16:52:22.537682] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmvol-client-0: Connected to vmvol-client-0, attached to remote volume '/gluster/vmvol/brick1'.
[2016-11-25 16:52:22.537705] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmvol-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-11-25 16:52:22.537744] I [MSGID: 108005] [afr-common.c:4299:afr_notify] 0-vmvol-replicate-0: Subvolume 'vmvol-client-0' came back up; going online.
[2016-11-25 16:52:22.537769] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmvol-client-0: Server lk version = 1
[2016-11-25 16:52:22.537950] I [rpc-clnt.c:1960:rpc_clnt_reconfig] 0-vmvol-client-2: changing port to 49152 (from 0)
[2016-11-25 16:52:22.538096] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmvol-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-11-25 16:52:22.538316] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmvol-client-1: Connected to vmvol-client-1, attached to remote volume '/gluster/vmvol/brick1'.
[2016-11-25 16:52:22.538332] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmvol-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-11-25 16:52:22.538439] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmvol-client-1: Server lk version = 1
[2016-11-25 16:52:22.538548] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmvol-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-11-25 16:52:22.538984] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmvol-client-2: Connected to vmvol-client-2, attached to remote volume '/gluster/vmvol/brick1'.
[2016-11-25 16:52:22.538993] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmvol-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2016-11-25 16:52:22.543005] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmvol-client-2: Server lk version = 1
[2016-11-25 16:52:22.543584] I [MSGID: 108031] [afr-common.c:2071:afr_local_discovery_cbk] 0-vmvol-replicate-0: selecting local read_child vmvol-client-0
[2016-11-25 16:52:22.544396] I [MSGID: 104041] [glfs-resolve.c:890:__glfs_active_subvol] 0-vmvol: switched to graph 6e6f6465-312d-3331-3934-352d32303136 (0)
[2016-11-25 16:52:22.657150] I [MSGID: 114021] [client.c:2122:notify] 0-vmvol-client-0: current graph is no longer active, destroying rpc_client
[2016-11-25 16:52:22.657177] I [MSGID: 114021] [client.c:2122:notify] 0-vmvol-client-1: current graph is no longer active, destroying rpc_client
[2016-11-25 16:52:22.657195] I [MSGID: 114018] [client.c:2037:client_rpc_notify] 0-vmvol-client-0: disconnected from vmvol-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2016-11-25 16:52:22.657199] I [MSGID: 114021] [client.c:2122:notify] 0-vmvol-client-2: current graph is no longer active, destroying rpc_client
[2016-11-25 16:52:22.657217] I [MSGID: 114018] [client.c:2037:client_rpc_notify] 0-vmvol-client-1: disconnected from vmvol-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2016-11-25 16:52:22.657246] W [MSGID: 108001] [afr-common.c:4379:afr_notify] 0-vmvol-replicate-0: Client-quorum is not met
[2016-11-25 16:52:22.657246] I [MSGID: 114018] [client.c:2037:client_rpc_notify] 0-vmvol-client-2: disconnected from vmvol-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2016-11-25 16:52:22.657270] E [MSGID: 108006] [afr-common.c:4321:afr_notify] 0-vmvol-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2016-11-25 16:52:22.657350] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-gfapi: size=84 max=2 total=2
[2016-11-25 16:52:22.657488] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-gfapi: size=156 max=3 total=3
[2016-11-25 16:52:22.657599] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-gfapi: size=108 max=1 total=1
[2016-11-25 16:52:22.657611] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-client-0: size=1300 max=2 total=19
[2016-11-25 16:52:22.657620] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-client-1: size=1300 max=2 total=19
[2016-11-25 16:52:22.657628] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-client-2: size=1300 max=3 total=19
[2016-11-25 16:52:22.657636] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-replicate-0: size=10556 max=5 total=14
[2016-11-25 16:52:22.657705] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-dht: size=1148 max=0 total=0
[2016-11-25 16:52:22.657740] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-dht: size=2316 max=2 total=8
[2016-11-25 16:52:22.657799] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-readdir-ahead: size=60 max=0 total=0
[2016-11-25 16:52:22.657808] I [io-stats.c:2951:fini] 0-vmvol: io-stats translator unloaded
[2016-11-25 16:52:22.657843] I [MSGID: 101191] [event-epoll.c:663:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 1
[2016-11-25 16:52:22.657843] I [MSGID: 101191] [event-epoll.c:663:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 2
[2016-11-25 16:52:23.536847] I [MSGID: 104045] [glfs-master.c:96:notify] 0-gfapi: New graph 6e6f6465-312d-3331-3934-352d32303136 (0) coming up
[2016-11-25 16:52:23.536877] I [MSGID: 114020] [client.c:2113:notify] 0-vmvol-client-0: parent translators are ready, attempting connect on transport
[2016-11-25 16:52:23.537166] I [MSGID: 114020] [client.c:2113:notify] 0-vmvol-client-1: parent translators are ready, attempting connect on transport
[2016-11-25 16:52:23.537259] I [rpc-clnt.c:1960:rpc_clnt_reconfig] 0-vmvol-client-0: changing port to 49152 (from 0)
[2016-11-25 16:52:23.537350] I [MSGID: 114020] [client.c:2113:notify] 0-vmvol-client-2: parent translators are ready, attempting connect on transport
[2016-11-25 16:52:23.537594] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmvol-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-11-25 16:52:23.537758] I [rpc-clnt.c:1960:rpc_clnt_reconfig] 0-vmvol-client-1: changing port to 49152 (from 0)
[2016-11-25 16:52:23.537761] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmvol-client-0: Connected to vmvol-client-0, attached to remote volume '/gluster/vmvol/brick1'.
[2016-11-25 16:52:23.537784] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmvol-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-11-25 16:52:23.537826] I [MSGID: 108005] [afr-common.c:4299:afr_notify] 0-vmvol-replicate-0: Subvolume 'vmvol-client-0' came back up; going online.
[2016-11-25 16:52:23.537852] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmvol-client-0: Server lk version = 1
[2016-11-25 16:52:23.538040] I [rpc-clnt.c:1960:rpc_clnt_reconfig] 0-vmvol-client-2: changing port to 49152 (from 0)
[2016-11-25 16:52:23.538179] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmvol-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-11-25 16:52:23.538384] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmvol-client-1: Connected to vmvol-client-1, attached to remote volume '/gluster/vmvol/brick1'.
[2016-11-25 16:52:23.538399] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmvol-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-11-25 16:52:23.538508] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmvol-client-1: Server lk version = 1
[2016-11-25 16:52:23.538612] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmvol-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-11-25 16:52:23.538994] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmvol-client-2: Connected to vmvol-client-2, attached to remote volume '/gluster/vmvol/brick1'.
[2016-11-25 16:52:23.539004] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmvol-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2016-11-25 16:52:23.543058] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmvol-client-2: Server lk version = 1
[2016-11-25 16:52:23.543606] I [MSGID: 108031] [afr-common.c:2071:afr_local_discovery_cbk] 0-vmvol-replicate-0: selecting local read_child vmvol-client-0
[2016-11-25 16:52:23.544280] I [MSGID: 104041] [glfs-resolve.c:890:__glfs_active_subvol] 0-vmvol: switched to graph 6e6f6465-312d-3331-3934-352d32303136 (0)
Segmentation fault (core dumped)
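
If it helps with debugging, a backtrace from the core dump could be captured roughly like this (a sketch; it assumes core dumps are enabled, land in the current directory as "core", and that debug symbols for qemu-utils are available):

ulimit -c unlimited                                              # allow core dumps in this shell
qemu-img create -f qcow2 gluster://node1/vmvol/64/foo.qcow2 1G   # reproduce the crash
gdb /usr/bin/qemu-img ./core                                     # open the binary together with the core file
(gdb) bt full                                                    # print a full backtrace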

BR,
Martin

On 25 Nov 2016, at 16:29, Thomas Lamprecht <address@hidden> wrote:

Hi,


On 11/25/2016 04:13 PM, Martin Toth wrote:
Hello all,

we are using your QEMU packages to deploy VMs on our Gluster storage via gfapi.
A recent upgrade broke our QEMU, and we are not able to deploy or start VMs anymore.

Gluster is running OK and is mounted with FUSE; everything looks fine, so the problem is probably in QEMU's access to Gluster via gfapi.

These are our current versions (Qemu is from https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.7 ):
ii  glusterfs-client                     3.7.17-ubuntu1~trusty2                        amd64        clustered file-system (client package)
ii  glusterfs-common                     3.7.17-ubuntu1~trusty2                        amd64        GlusterFS common libraries and translator modules
ii  glusterfs-server                     3.7.17-ubuntu1~trusty2                        amd64        clustered file-system (server package)
ii  qemu-keymaps                         2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1 all          QEMU keyboard maps
ii  qemu-kvm                           2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1 amd64        QEMU Full virtualization
ii  qemu-system-common                2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1 amd64        QEMU full system emulation binaries (common files)
ii  qemu-system-x86                      2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1 amd64        QEMU full system emulation binaries (x86)
ii  qemu-utils                           2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1 amd64        QEMU utilities

We see the error attached below in this mail. Do you have any suggestions as to what could cause this problem?

Thanks in advance for your help.

Regards,
Martin

Volume Name: vmvol
Type: Replicate
Volume ID: a72b5c9e-b8ff-488e-b10f-5ba4b71e62b8
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1.storage.internal:/gluster/vmvol/brick1
Brick2: node2.storage.internal:/gluster/vmvol/brick1
Brick3: node3.storage.internal:/gluster/vmvol/brick1
Options Reconfigured:
cluster.self-heal-daemon: on
nfs.disable: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
performance.stat-prefetch: on
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
cluster.eager-lock: enable
storage.owner-gid: 9869
storage.owner-uid: 9869
server.allow-insecure: on
performance.readdir-ahead: on

2016-11-25 12:54:06.121+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-x86_64 -name one-67 -S -machine pc-i440fx-trusty,accel=kvm,usb=off -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 3c703b26-3f57-44d0-8d76-bb281fd8902c -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-67.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=gluster://node1:24007/vmvol/67/disk.1,if=none,id=drive-ide0-0-0,format=qcow2,cache=none -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 -drive file=/var/lib/one//datastores/112/67/disk.0,if=none,id=drive-ide0-0-1,readonly=on,format=raw,cache=none -device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1,bootindex=3 -drive file=/var/lib/one//datastores/112/67/disk.2,if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=24,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:0a:c8:64:1f,bus=pci.0,addr=0x3,bootindex=1 -vnc 0.0.0.0:67 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
[2016-11-25 12:54:06.254217] I [MSGID: 104045] [glfs-master.c:96:notify] 0-gfapi: New graph 6e6f6465-322d-3231-3330-392d32303136 (0) coming up
[2016-11-25 12:54:06.254246] I [MSGID: 114020] [client.c:2113:notify] 0-vmvol-client-0: parent translators are ready, attempting connect on transport
[2016-11-25 12:54:06.254473] I [MSGID: 114020] [client.c:2113:notify] 0-vmvol-client-1: parent translators are ready, attempting connect on transport
[2016-11-25 12:54:06.254672] I [MSGID: 114020] [client.c:2113:notify] 0-vmvol-client-2: parent translators are ready, attempting connect on transport
[2016-11-25 12:54:06.254844] I [rpc-clnt.c:1960:rpc_clnt_reconfig] 0-vmvol-client-0: changing port to 49152 (from 0)
[2016-11-25 12:54:06.255303] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmvol-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-11-25 12:54:06.255391] I [rpc-clnt.c:1960:rpc_clnt_reconfig] 0-vmvol-client-2: changing port to 49152 (from 0)
[2016-11-25 12:54:06.255844] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmvol-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-11-25 12:54:06.256259] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmvol-client-2: Connected to vmvol-client-2, attached to remote volume '/gluster/vmvol/brick1'.
[2016-11-25 12:54:06.256268] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmvol-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2016-11-25 12:54:06.256288] I [MSGID: 108005] [afr-common.c:4299:afr_notify] 0-vmvol-replicate-0: Subvolume 'vmvol-client-2' came back up; going online.
[2016-11-25 12:54:06.256464] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmvol-client-2: Server lk version = 1
[2016-11-25 12:54:06.262716] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmvol-client-0: Connected to vmvol-client-0, attached to remote volume '/gluster/vmvol/brick1'.
[2016-11-25 12:54:06.262729] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmvol-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-11-25 12:54:06.262863] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmvol-client-0: Server lk version = 1
[2016-11-25 12:54:06.267906] I [rpc-clnt.c:1960:rpc_clnt_reconfig] 0-vmvol-client-1: changing port to 49152 (from 0)
[2016-11-25 12:54:06.268148] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmvol-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-11-25 12:54:06.287724] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmvol-client-1: Connected to vmvol-client-1, attached to remote volume '/gluster/vmvol/brick1'.
[2016-11-25 12:54:06.287734] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmvol-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-11-25 12:54:06.313439] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmvol-client-1: Server lk version = 1
[2016-11-25 12:54:06.314113] I [MSGID: 108031] [afr-common.c:2071:afr_local_discovery_cbk] 0-vmvol-replicate-0: selecting local read_child vmvol-client-1
[2016-11-25 12:54:06.314928] I [MSGID: 104041] [glfs-resolve.c:890:__glfs_active_subvol] 0-vmvol: switched to graph 6e6f6465-322d-3231-3330-392d32303136 (0)
[2016-11-25 12:54:16.331479] I [MSGID: 114021] [client.c:2122:notify] 0-vmvol-client-0: current graph is no longer active, destroying rpc_client
[2016-11-25 12:54:16.331507] I [MSGID: 114021] [client.c:2122:notify] 0-vmvol-client-1: current graph is no longer active, destroying rpc_client
[2016-11-25 12:54:16.331517] I [MSGID: 114018] [client.c:2037:client_rpc_notify] 0-vmvol-client-0: disconnected from vmvol-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2016-11-25 12:54:16.331531] I [MSGID: 114018] [client.c:2037:client_rpc_notify] 0-vmvol-client-1: disconnected from vmvol-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2016-11-25 12:54:16.331534] I [MSGID: 114021] [client.c:2122:notify] 0-vmvol-client-2: current graph is no longer active, destroying rpc_client
[2016-11-25 12:54:16.331543] W [MSGID: 108001] [afr-common.c:4379:afr_notify] 0-vmvol-replicate-0: Client-quorum is not met
[2016-11-25 12:54:16.331667] E [rpc-clnt.c:370:saved_frames_unwind] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f4dcee94502] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f4dcec642be] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f4dcec643ce] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f4dcec65aa4] (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x120)[0x7f4dcec66360] ))))) 0-vmvol-client-2: forced unwinding frame type(GlusterFS 3.3) op(WRITE(13)) called at 2016-11-25 12:54:16.331128 (xid=0x2f62)
[2016-11-25 12:54:16.331684] W [MSGID: 114031] [client-rpc-fops.c:907:client3_3_writev_cbk] 0-vmvol-client-2: remote operation failed [Transport endpoint is not connected]
[2016-11-25 12:54:16.331701] I [MSGID: 114018] [client.c:2037:client_rpc_notify] 0-vmvol-client-2: disconnected from vmvol-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2016-11-25 12:54:16.331710] E [MSGID: 108006] [afr-common.c:4321:afr_notify] 0-vmvol-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2016-11-25 12:54:16.331712] E [MSGID: 114031] [client-rpc-fops.c:1624:client3_3_inodelk_cbk] 0-vmvol-client-0: remote operation failed [Transport endpoint is not connected]
[2016-11-25 12:54:16.331727] E [MSGID: 114031] [client-rpc-fops.c:1624:client3_3_inodelk_cbk] 0-vmvol-client-1: remote operation failed [Transport endpoint is not connected]
[2016-11-25 12:54:16.331735] E [MSGID: 114031] [client-rpc-fops.c:1624:client3_3_inodelk_cbk] 0-vmvol-client-2: remote operation failed [Transport endpoint is not connected]
[2016-11-25 12:54:16.331749] E [MSGID: 114031] [client-rpc-fops.c:1624:client3_3_inodelk_cbk] 0-vmvol-client-0: remote operation failed [Transport endpoint is not connected]
[2016-11-25 12:54:16.331756] E [MSGID: 114031] [client-rpc-fops.c:1624:client3_3_inodelk_cbk] 0-vmvol-client-1: remote operation failed [Transport endpoint is not connected]
[2016-11-25 12:54:16.331762] E [MSGID: 114031] [client-rpc-fops.c:1624:client3_3_inodelk_cbk] 0-vmvol-client-2: remote operation failed [Transport endpoint is not connected]
[2016-11-25 12:54:16.331768] W [inode.c:1814:inode_table_destroy] (-->/usr/lib/x86_64-linux-gnu/libgfapi.so.0(glfs_fini+0x3cf) [0x7f4dd57ef49f] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_table_destroy_all+0x51) [0x7f4dceebe4d1] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_table_destroy+0xd4) [0x7f4dceebe3d4] ) 0-gfapi: Active inode(0x7f4db83101a4) with refcount(1) found during cleanup
[2016-11-25 12:54:16.331834] E [inode.c:468:__inode_unref] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.17/xlator/cluster/replicate.so(afr_local_cleanup+0x128) [0x7f4dbb5dafd8] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x21) [0x7f4dceebc7d1] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x4a0b6) [0x7f4dceebc0b6] ) 0-: Assertion failed: inode->ref
[2016-11-25 12:54:16.331876] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-gfapi: size=84 max=2 total=2
[2016-11-25 12:54:16.331924] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-gfapi: size=156 max=3 total=3
[2016-11-25 12:54:16.332037] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-gfapi: size=108 max=2 total=2
[2016-11-25 12:54:16.332046] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-client-0: size=1300 max=2 total=3038
[2016-11-25 12:54:16.332054] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-client-1: size=1300 max=2 total=3039
[2016-11-25 12:54:16.332060] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-client-2: size=1300 max=3 total=3038
[2016-11-25 12:54:16.332068] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-replicate-0: size=10556 max=4 total=3038
[2016-11-25 12:54:16.332107] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-dht: size=1148 max=0 total=0
[2016-11-25 12:54:16.332141] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-dht: size=2316 max=2 total=6
[2016-11-25 12:54:16.332200] I [MSGID: 101053] [mem-pool.c:636:mem_pool_destroy] 0-vmvol-readdir-ahead: size=60 max=0 total=0
[2016-11-25 12:54:16.332212] I [io-stats.c:2951:fini] 0-vmvol: io-stats translator unloaded
[2016-11-25 12:54:16.332245] I [MSGID: 101191] [event-epoll.c:663:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 1
[2016-11-25 12:54:16.332259] I [MSGID: 101191] [event-epoll.c:663:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 2
qemu-system-x86_64: -drive file=gluster://node1:24007/vmvol/67/disk.1,if=none,id=drive-ide0-0-0,format=qcow2,cache=none: could not open disk image gluster://node1:24007/vmvol/67/disk.1: Could not read L1 table: Bad file descriptor
2016-11-25 12:54:17.199+0000: shutting down



It looks like something we ran into: https://bugs.launchpad.net/qemu/+bug/1644754
If you did not create the disk directly with QEMU's GlusterFS backend but are now accessing it through it (as your error suggests), it could be a file-size alignment problem.
Try using `truncate -s ...` to grow the image to the next 512-byte boundary; if that fixes your problem, it should be that bug.
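
For example, something along these lines (a sketch; IMG is a placeholder for the path to your image on a FUSE mount or brick, and the command only ever grows the file):

IMG=/path/to/disk.1                                     # placeholder, substitute your image
SIZE=$(stat -c %s "$IMG")                               # current size in bytes
truncate -s $(( (SIZE + 511) / 512 * 512 )) "$IMG"      # round up to the next 512-byte boundary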

cheers,
Thomas





