Re: [Qemu-devel] [PATCH v3 RFC] block/vxhs: Initial commit to add Veritas HyperScale VxHS block device support

From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [PATCH v3 RFC] block/vxhs: Initial commit to add Veritas HyperScale VxHS block device support
Date: Mon, 15 Aug 2016 11:20:46 +0100
User-agent: Mutt/1.6.2 (2016-07-01)
On Sat, Aug 13, 2016 at 09:35:12PM -0700, Ashish Mittal wrote:
> This patch adds support for a new block device type called "vxhs".
> Source code for the library that this code loads can be downloaded from:
> https://github.com/MittalAshish/libqnio.git
>
> Sample command line with a vxhs block device specification:
> ./qemu-system-x86_64 -name instance-00000008 -S -vnc 0.0.0.0:0 -k en-us -vga
> cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg
> timestamp=on
> 'json:{"driver":"vxhs","vdisk_id":"{c3e9095a-a5ee-4dce-afeb-2a59fb387410}","server":[{"host":"172.172.17.4","port":"9999"},{"host":"172.172.17.2","port":"9999"}]}'
>
> Signed-off-by: Ashish Mittal <address@hidden>
> ---
> v3 changelog:
> =============
> (1) Implemented QAPI interface for passing VxHS block device parameters.
>
> Sample trace o/p after filtering out libqnio debug messages:
>
> ./qemu-system-x86_64 -trace enable=vxhs* -name instance-00000008 -S -vnc
> 0.0.0.0:0 -k en-us -vga cirrus -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
> 'json:{"driver":"vxhs","vdisk_id":"/{c3e9095a-a5ee-4dce-afeb-2a59fb387410}","server":[{"host":"172.172.17.4","port":"9999"},{"host":"172.172.17.2","port":"9999"}]}'
> address@hidden:vxhs_parse_json vdisk_id from json /{c3e9095a-a5ee-4dce-afeb-2a59fb387410}
> address@hidden:vxhs_parse_json_numservers Number of servers passed = 2
> address@hidden:vxhs_parse_json_hostinfo Host 1: IP 172.172.17.4, Port 9999
> address@hidden:vxhs_parse_json_hostinfo Host 2: IP 172.172.17.2, Port 9999
> address@hidden:vxhs_open vxhs_vxhs_open: came in to open file = (null)
> address@hidden:vxhs_setup_qnio Context to HyperScale IO manager = 0x7f0996fd3920
> address@hidden:vxhs_open_device Adding host 172.172.17.4:9999 to BDRVVXHSState
> address@hidden:vxhs_open_device Adding host 172.172.17.2:9999 to BDRVVXHSState
> address@hidden:vxhs_get_vdisk_stat vDisk /{c3e9095a-a5ee-4dce-afeb-2a59fb387410} stat ioctl returned size 429
> ...
>
> TODO: Fixes to issues raised by review comments from Stefan and Kevin.
>
>  block/Makefile.objs  |    2 +
>  block/trace-events   |   46 ++
>  block/vxhs.c         | 1424 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  block/vxhs.h         |  293 +++++++++++
>  configure            |   50 ++
>  qapi/block-core.json |   22 +-
>  6 files changed, 1835 insertions(+), 2 deletions(-)
> create mode 100644 block/vxhs.c
> create mode 100644 block/vxhs.h
> diff --git a/block/vxhs.c b/block/vxhs.c
> new file mode 100644
> index 0000000..3960ae5
> --- /dev/null
> +++ b/block/vxhs.c
> +
> +static QemuOptsList runtime_opts = {
> + .name = "vxhs",
> + .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
> + .desc = {
> + {
> + .name = VXHS_OPT_FILENAME,
> + .type = QEMU_OPT_STRING,
> + .help = "URI to the Veritas HyperScale image",
> + },
> + { /* end of list */ }
> + },
> +};
> +
> +static QemuOptsList runtime_tcp_opts = {
> + .name = "vxhs_tcp",
> + .head = QTAILQ_HEAD_INITIALIZER(runtime_tcp_opts.head),
> + .desc = {
> + {
> + .name = VXHS_OPT_HOST,
> + .type = QEMU_OPT_STRING,
> + .help = "host address (ipv4 addresses)",
> + },
> + {
> + .name = VXHS_OPT_PORT,
> + .type = QEMU_OPT_NUMBER,
> + .help = "port number on which VxHSD is listening (default 9999)",
> + },
> + {
> + .name = "to",
> + .type = QEMU_OPT_NUMBER,
> + .help = "max port number, not supported by VxHS",
> + },
> + {
> + .name = "ipv4",
> + .type = QEMU_OPT_BOOL,
> + .help = "ipv4 bool value, not supported by VxHS",
> + },
> + {
> + .name = "ipv6",
> + .type = QEMU_OPT_BOOL,
> + .help = "ipv6 bool value, not supported by VxHS",
> + },
> + { /* end of list */ }
> + },
> +};
> +
> +static QemuOptsList runtime_json_opts = {
> + .name = "vxhs_json",
> + .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
> + .desc = {
> + {
> + .name = VXHS_OPT_VDISK_ID,
> + .type = QEMU_OPT_STRING,
> + .help = "UUID of the VxHS vdisk",
> + },
> + { /* end of list */ }
> + },
> +};
> +
> +
> +/*
> + * Convert the json formatted command line into qapi.
> + */
> +
> +static int vxhs_parse_json(BlockdevOptionsVxHS *conf,
> + QDict *options, Error **errp)
> +{
> + QemuOpts *opts;
> + InetSocketAddress *vxhsconf;
> + InetSocketAddressList *curr = NULL;
> + QDict *backing_options = NULL;
> + Error *local_err = NULL;
> + char *str = NULL;
> + const char *ptr = NULL;
> + size_t num_servers;
> + int i;
> +
> + /* create opts info from runtime_json_opts list */
> + opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
> + qemu_opts_absorb_qdict(opts, options, &local_err);
> + if (local_err) {
> + goto out;
> + }
> +
> + ptr = qemu_opt_get(opts, VXHS_OPT_VDISK_ID);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, VXHS_OPT_VDISK_ID);
> + goto out;
> + }
> + conf->vdisk_id = g_strdup(ptr);
> + trace_vxhs_parse_json(ptr);
> +
> + num_servers = qdict_array_entries(options, VXHS_OPT_SERVER);
> + if (num_servers < 1) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, "server");
> + goto out;
> + }
> + trace_vxhs_parse_json_numservers(num_servers);
> + qemu_opts_del(opts);
> +
> + for (i = 0; i < num_servers; i++) {
> + str = g_strdup_printf(VXHS_OPT_SERVER_PATTERN"%d.", i);
> + qdict_extract_subqdict(options, &backing_options, str);
> +
> + /* create opts info from runtime_tcp_opts list */
> + opts = qemu_opts_create(&runtime_tcp_opts, NULL, 0, &error_abort);
> + qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> + if (local_err) {
> + goto out;
> + }
> +
> + vxhsconf = g_new0(InetSocketAddress, 1);
> + ptr = qemu_opt_get(opts, VXHS_OPT_HOST);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER,
> + VXHS_OPT_HOST);
> + error_append_hint(&local_err, GERR_INDEX_HINT, i);
> + goto out;
> + }
> + vxhsconf->host = g_strdup(ptr);
> +
> + ptr = qemu_opt_get(opts, VXHS_OPT_PORT);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER,
> + VXHS_OPT_PORT);
> + error_append_hint(&local_err, GERR_INDEX_HINT, i);
> + goto out;
> + }
> + vxhsconf->port = g_strdup(ptr);
> +
> + /* defend for unsupported fields in InetSocketAddress,
> + * i.e. @ipv4, @ipv6 and @to
> + */
> + ptr = qemu_opt_get(opts, VXHS_OPT_TO);
> + if (ptr) {
> + vxhsconf->has_to = true;
> + }
> + ptr = qemu_opt_get(opts, VXHS_OPT_IPV4);
> + if (ptr) {
> + vxhsconf->has_ipv4 = true;
> + }
> + ptr = qemu_opt_get(opts, VXHS_OPT_IPV6);
> + if (ptr) {
> + vxhsconf->has_ipv6 = true;
> + }
> + if (vxhsconf->has_to) {
> + error_setg(&local_err, "Parameter 'to' not supported");
> + goto out;
> + }
> + if (vxhsconf->has_ipv4 || vxhsconf->has_ipv6) {
> + error_setg(&local_err, "Parameters 'ipv4/ipv6' not supported");
> + goto out;
> + }
> + trace_vxhs_parse_json_hostinfo(i+1, vxhsconf->host, vxhsconf->port);
> +
> + if (conf->server == NULL) {
> + conf->server = g_new0(InetSocketAddressList, 1);
> + conf->server->value = vxhsconf;
> + curr = conf->server;
> + } else {
> + curr->next = g_new0(InetSocketAddressList, 1);
> + curr->next->value = vxhsconf;
> + curr = curr->next;
> + }
> +
> + qdict_del(backing_options, str);
> + qemu_opts_del(opts);
> + g_free(str);
> + str = NULL;
> + }
> +
> + return 0;
> +
> +out:
> + error_propagate(errp, local_err);
> + qemu_opts_del(opts);
> + if (str) {
> + qdict_del(backing_options, str);
> + g_free(str);
> + }
> + errno = EINVAL;
> + return -errno;
> +}
Ewww, this is really horrible code. It is open-coding a special-purpose
QemuOpts -> QDict -> QAPI conversion. We should really make my
qdict_crumple() API series a prerequisite of this, so that you can use
qdict_crumple() + the QMP input visitor to do the conversion generically,
removing all of this code:
https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg03118.html
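Something along these lines (just a sketch - the exact qdict_crumple()
signature depends on how that series lands, and the QAPI type name is
taken from your patch):

    QObject *crumpled;
    Visitor *v;

    /* Rebuild the flat "server.0.host"-style keys into a nested
     * structure, then let the generated QAPI visitor do all of the
     * parsing and validation. */
    crumpled = qdict_crumple(options, errp);
    if (!crumpled) {
        return -EINVAL;
    }
    v = qmp_input_visitor_new(crumpled, true);
    visit_type_BlockdevOptionsVxHS(v, NULL, &conf, errp);
    visit_free(v);
    qobject_decref(crumpled);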
> +/*
> + * vxhs_parse_uri: Parse the incoming URI and populate *conf with the
> + * vdisk_id and all the host(s) information. The host at index 0 is the
> + * local storage agent, whose IP is the efficient internal address in
> + * the URI, e.g. 192.168.0.2. The remaining hosts are the reflection
> + * target storage agents; their IPs are the E-W data network addresses
> + * of the reflection node agents, also extracted from the URI.
> + */
> +static int vxhs_parse_uri(BlockdevOptionsVxHS *conf,
> + const char *filename)
Delete this method entirely. We should not be adding URI syntax for any new
block driver. The QAPI schema syntax is all we need.
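With the QAPI scheme, users can spell this either via the json:
pseudo-protocol shown in your commit message, or - once the
qdict_crumple() conversion is in place - as flat dotted keys, e.g.
(illustrative only):

  -drive driver=vxhs,vdisk_id={UUID},server.0.host=1.2.3.4,server.0.port=9999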
> +
> +static void *qemu_vxhs_init(BlockdevOptionsVxHS *conf,
> + const char *filename,
> + QDict *options, Error **errp)
> +{
> + int ret;
> +
> + if (filename) {
> + ret = vxhs_parse_uri(conf, filename);
> + if (ret < 0) {
> + error_setg(errp, "invalid URI");
> + error_append_hint(errp, "Usage: file=vxhs://"
> + "[host[:port]]/{VDISK_UUID}\n");
> + errno = -ret;
> + return NULL;
> + }
> + } else {
> + ret = vxhs_parse_json(conf, options, errp);
> + if (ret < 0) {
> + error_append_hint(errp, "Usage: "
> + "json:{\"driver\":\"vxhs\","
> + "\"vdisk_id\":\"{VDISK_UUID}\","
> + "\"server\":[{\"host\":\"1.2.3.4\","
> + "\"port\":\"9999\"}"
> + ",{\"host\":\"4.5.6.7\",\"port\":\"9999\"}]}"
> + "\n");
> + errno = -ret;
> + return NULL;
> + }
> +
> + }
> +
> + return NULL;
> +}
> +
> +int vxhs_open_device(BlockdevOptionsVxHS *conf, int *cfd, int *rfd,
> + BDRVVXHSState *s)
> +{
> + InetSocketAddressList *curr = NULL;
> + char *file_name;
> + char *of_vsa_addr;
> + int ret = 0;
> + int i = 0;
> +
> + pthread_mutex_lock(&of_global_ctx_lock);
> + if (global_qnio_ctx == NULL) {
> + global_qnio_ctx = vxhs_setup_qnio();
> + if (global_qnio_ctx == NULL) {
> + pthread_mutex_unlock(&of_global_ctx_lock);
> + return -1;
> + }
> + }
> + pthread_mutex_unlock(&of_global_ctx_lock);
> +
> + curr = conf->server;
> + s->vdisk_guid = g_strdup(conf->vdisk_id);
> +
> + for (i = 0; curr != NULL; i++) {
> + s->vdisk_hostinfo[i].hostip = g_strdup(curr->value->host);
> + s->vdisk_hostinfo[i].port = g_ascii_strtoll(curr->value->port, NULL, 0);
> +
> + s->vdisk_hostinfo[i].qnio_cfd = -1;
> + s->vdisk_hostinfo[i].vdisk_rfd = -1;
> + trace_vxhs_open_device(curr->value->host, curr->value->port);
> + curr = curr->next;
> + }
> + s->vdisk_nhosts = i;
> + s->vdisk_cur_host_idx = 0;
> +
> +
> + *cfd = -1;
> + of_vsa_addr = g_new0(char, OF_MAX_SERVER_ADDR);
> + file_name = g_new0(char, OF_MAX_FILE_LEN);
> + snprintf(file_name, OF_MAX_FILE_LEN, "%s%s", vdisk_prefix, s->vdisk_guid);
> + snprintf(of_vsa_addr, OF_MAX_SERVER_ADDR, "of://%s:%d",
> + s->vdisk_hostinfo[s->vdisk_cur_host_idx].hostip,
> + s->vdisk_hostinfo[s->vdisk_cur_host_idx].port);
*Never* use g_new + snprintf, particularly not with a fixed-length
buffer. g_strdup_printf() is the right way.
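i.e. something like:

    char *file_name = g_strdup_printf("%s%s", vdisk_prefix, s->vdisk_guid);
    char *of_vsa_addr = g_strdup_printf("of://%s:%d",
        s->vdisk_hostinfo[s->vdisk_cur_host_idx].hostip,
        s->vdisk_hostinfo[s->vdisk_cur_host_idx].port);

which allocates exactly what is needed and can never silently truncate.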
> +
> + *cfd = qemu_open_iio_conn(global_qnio_ctx, of_vsa_addr, 0);
> + if (*cfd < 0) {
> + trace_vxhs_open_device_qnio(of_vsa_addr);
> + ret = -EIO;
> + goto out;
> + }
> + *rfd = qemu_iio_devopen(global_qnio_ctx, *cfd, file_name, 0);
> + s->aio_context = qemu_get_aio_context();
> +
> +out:
> + if (file_name != NULL) {
> + g_free(file_name);
> + }
> + if (of_vsa_addr != NULL) {
> + g_free(of_vsa_addr);
> + }
Useless if-before-free - g_free happily accepts NULL so don't check
for it yourself.
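i.e. just:

    g_free(file_name);
    g_free(of_vsa_addr);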
> +
> + return ret;
> +}
> +
> +int vxhs_create(const char *filename,
> + QemuOpts *options, Error **errp)
> +{
> + int ret = 0;
> + int qemu_cfd = 0;
> + int qemu_rfd = 0;
> + BDRVVXHSState s;
> + BlockdevOptionsVxHS *conf = NULL;
> +
> + conf = g_new0(BlockdevOptionsVxHS, 1);
> + trace_vxhs_create(filename);
> + qemu_vxhs_init(conf, filename, NULL, errp);
> + ret = vxhs_open_device(conf, &qemu_cfd, &qemu_rfd, &s);
> +
> + qapi_free_BlockdevOptionsVxHS(conf);
> + return ret;
> +}
> +
> +int vxhs_open(BlockDriverState *bs, QDict *options,
> + int bdrv_flags, Error **errp)
> +{
> + BDRVVXHSState *s = bs->opaque;
> + int ret = 0;
> + int qemu_qnio_cfd = 0;
> + int qemu_rfd = 0;
> + QemuOpts *opts;
> + Error *local_err = NULL;
> + const char *vxhs_uri;
> + BlockdevOptionsVxHS *conf = NULL;
> +
> + opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
> + qemu_opts_absorb_qdict(opts, options, &local_err);
> + if (local_err) {
> + error_propagate(errp, local_err);
> + ret = -EINVAL;
> + goto out;
> + }
> + vxhs_uri = qemu_opt_get(opts, VXHS_OPT_FILENAME);
> +
> + conf = g_new0(BlockdevOptionsVxHS, 1);
> +
> + qemu_vxhs_init(conf, vxhs_uri, options, errp);
> + memset(s, 0, sizeof(*s));
> + trace_vxhs_open(vxhs_uri);
> + ret = vxhs_open_device(conf, &qemu_qnio_cfd, &qemu_rfd, s);
> + if (ret != 0) {
> + trace_vxhs_open_fail(ret);
> + return ret;
> + }
> + s->qnio_ctx = global_qnio_ctx;
> + s->vdisk_hostinfo[0].qnio_cfd = qemu_qnio_cfd;
> + s->vdisk_hostinfo[0].vdisk_rfd = qemu_rfd;
> + s->vdisk_size = 0;
> + QSIMPLEQ_INIT(&s->vdisk_aio_retryq);
> +
> + ret = qemu_pipe(s->fds);
> + if (ret < 0) {
> + trace_vxhs_open_epipe('.');
> + ret = -errno;
> + goto out;
> + }
> + fcntl(s->fds[VDISK_FD_READ], F_SETFL, O_NONBLOCK);
> +
> + aio_set_fd_handler(s->aio_context, s->fds[VDISK_FD_READ],
> + false, vxhs_aio_event_reader, NULL, s);
> +
> + /*
> + * Allocate/Initialize the spin-locks.
> + *
> + * NOTE:
> + * Since spin lock is being allocated
> + * dynamically hence moving acb struct
> + * specific lock to BDRVVXHSState
> + * struct. The reason being,
> + * we don't want the overhead of spin
> + * lock being dynamically allocated and
> + * freed for every AIO.
> + */
> + s->vdisk_lock = VXHS_SPIN_LOCK_ALLOC;
> + s->vdisk_acb_lock = VXHS_SPIN_LOCK_ALLOC;
> +
> + qapi_free_BlockdevOptionsVxHS(conf);
> + return 0;
> +
> +out:
> + if (s->vdisk_hostinfo[0].vdisk_rfd >= 0) {
> + qemu_iio_devclose(s->qnio_ctx, 0,
> + s->vdisk_hostinfo[0].vdisk_rfd);
> + }
> + /* never close qnio_cfd */
> + trace_vxhs_open_fail(ret);
> + qapi_free_BlockdevOptionsVxHS(conf);
> + return ret;
> +}
> +
> +
> +/*
> + * This is called by QEMU when a flush gets triggered from within
> + * a guest at the block layer, either for IDE or SCSI disks.
> + */
> +int vxhs_co_flush(BlockDriverState *bs)
> +{
> + BDRVVXHSState *s = bs->opaque;
> + uint64_t size = 0;
> + int ret = 0;
> + uint32_t iocount = 0;
> +
> + ret = qemu_iio_ioctl(s->qnio_ctx,
> + s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
> + VDISK_AIO_FLUSH, &size, NULL, IIO_FLAG_SYNC);
> +
> + if (ret < 0) {
> + /*
> + * Currently not handling the flush ioctl
> + * failure because of network connection
> + * disconnect. Since all the writes are
> + * committed to persistent storage, this
> + * flush call is a no-op and we can safely
> + * return success status to the caller.
> + *
> + * If any write failure occurs for inflight
> + * write AIO because of network disconnect
> + * then anyway IO failover will be triggered.
> + */
> + trace_vxhs_co_flush(s->vdisk_guid, ret, errno);
> + ret = 0;
> + }
> +
> + iocount = vxhs_get_vdisk_iocount(s);
> + if (iocount > 0) {
> + trace_vxhs_co_flush_iocnt(iocount);
> + }
You're not doing anything with the iocount value here. Either
delete this, or do something useful with it.
> +
> + return ret;
> +}
> +
> +unsigned long vxhs_get_vdisk_stat(BDRVVXHSState *s)
> +{
> + void *ctx = NULL;
> + int flags = 0;
> + unsigned long vdisk_size = 0;
Is this measured in bytes? If so, 'unsigned long' is a rather
limited choice for 32-bit builds; long long / int64_t
would be better.
> + int ret = 0;
> +
> + ret = qemu_iio_ioctl(s->qnio_ctx,
> + s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
> + VDISK_STAT, &vdisk_size, ctx, flags);
> +
> + if (ret < 0) {
> + trace_vxhs_get_vdisk_stat_err(s->vdisk_guid, ret, errno);
> + }
> +
> + trace_vxhs_get_vdisk_stat(s->vdisk_guid, vdisk_size);
> + return vdisk_size;
Presumably vdisk_size is garbage when ret < 0, so you really
need to propagate the error better.
> +}
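Something like this (a sketch only - I'm assuming the VDISK_STAT ioctl
can fill in a 64-bit value; if it can't, that needs fixing in libqnio
as well):

    /* Return the vdisk size in bytes on success, -errno on failure. */
    static int64_t vxhs_get_vdisk_stat(BDRVVXHSState *s)
    {
        int64_t vdisk_size = 0;
        int ret;

        ret = qemu_iio_ioctl(s->qnio_ctx,
                  s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
                  VDISK_STAT, &vdisk_size, NULL, 0);
        if (ret < 0) {
            trace_vxhs_get_vdisk_stat_err(s->vdisk_guid, ret, errno);
            return -EIO;
        }

        trace_vxhs_get_vdisk_stat(s->vdisk_guid, vdisk_size);
        return vdisk_size;
    }

Callers like vxhs_getlength() can then just propagate the negative
value instead of guessing whether a zero size means an error.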
> +
> +/*
> + * Returns the size of vDisk in bytes. This is required
> + * by QEMU block upper block layer so that it is visible
> + * to guest.
> + */
> +int64_t vxhs_getlength(BlockDriverState *bs)
> +{
> + BDRVVXHSState *s = bs->opaque;
> + unsigned long vdisk_size = 0;
> +
> + if (s->vdisk_size > 0) {
> + vdisk_size = s->vdisk_size;
> + } else {
> + /*
> + * Fetch the vDisk size using stat ioctl
> + */
> + vdisk_size = vxhs_get_vdisk_stat(s);
> + if (vdisk_size > 0) {
> + s->vdisk_size = vdisk_size;
> + }
> + }
> +
> + if (vdisk_size > 0) {
> + return (int64_t)vdisk_size; /* return size in bytes */
> + } else {
> + return -EIO;
> + }
> +}
> +
> +/*
> + * Returns actual blocks allocated for the vDisk.
> + * This is required by qemu-img utility.
> + */
> +int64_t vxhs_get_allocated_blocks(BlockDriverState *bs)
> +{
> + BDRVVXHSState *s = bs->opaque;
> + unsigned long vdisk_size = 0;
> +
> + if (s->vdisk_size > 0) {
> + vdisk_size = s->vdisk_size;
> + } else {
> + /*
> + * TODO:
> + * Once HyperScale storage-virtualizer provides
> + * actual physical allocation of blocks then
> + * fetch that information and return back to the
> + * caller but for now just get the full size.
> + */
> + vdisk_size = vxhs_get_vdisk_stat(s);
> + if (vdisk_size > 0) {
> + s->vdisk_size = vdisk_size;
> + }
> + }
> +
> + if (vdisk_size > 0) {
> + return (int64_t)vdisk_size; /* return size in bytes */
> + } else {
> + return -EIO;
> + }
> +}
> +/*
> + * Try to reopen the vDisk on one of the available hosts
> + * If vDisk reopen is successful on any of the host then
> + * check if that node is ready to accept I/O.
> + */
> +int vxhs_reopen_vdisk(BDRVVXHSState *s, int index)
> +{
> + char *of_vsa_addr = NULL;
> + char *file_name = NULL;
> + int res = 0;
> +
> + /*
> + * Don't close the channel if it was opened
> + * before successfully. It will be handled
> + * within iio* api if the same channel open
> + * fd is reused.
> + *
> + * close stale vdisk device remote fd since
> + * it is invalid after channel disconnect.
> + */
> + if (s->vdisk_hostinfo[index].vdisk_rfd >= 0) {
> + qemu_iio_devclose(s->qnio_ctx, 0,
> + s->vdisk_hostinfo[index].vdisk_rfd);
> + s->vdisk_hostinfo[index].vdisk_rfd = -1;
> + }
> + /*
> + * build storage agent address and vdisk device name strings
> + */
> + of_vsa_addr = g_new0(char, OF_MAX_SERVER_ADDR);
> + file_name = g_new0(char, OF_MAX_FILE_LEN);
> + snprintf(file_name, OF_MAX_FILE_LEN, "%s%s", vdisk_prefix, s->vdisk_guid);
> + snprintf(of_vsa_addr, OF_MAX_SERVER_ADDR, "of://%s:%d",
> + s->vdisk_hostinfo[index].hostip, s->vdisk_hostinfo[index].port);
Again no snprintf please.
> + /*
> + * open qnio channel to storage agent if not opened before.
> + */
> + if (s->vdisk_hostinfo[index].qnio_cfd < 0) {
> + s->vdisk_hostinfo[index].qnio_cfd =
> + qemu_open_iio_conn(global_qnio_ctx, of_vsa_addr, 0);
> + if (s->vdisk_hostinfo[index].qnio_cfd < 0) {
> + trace_vxhs_reopen_vdisk(s->vdisk_hostinfo[index].hostip);
> + res = ENODEV;
> + goto out;
> + }
> + }
> + /*
> + * open vdisk device
> + */
> + s->vdisk_hostinfo[index].vdisk_rfd =
> + qemu_iio_devopen(global_qnio_ctx,
> + s->vdisk_hostinfo[index].qnio_cfd, file_name, 0);
> + if (s->vdisk_hostinfo[index].vdisk_rfd < 0) {
> + trace_vxhs_reopen_vdisk_openfail(file_name);
> + res = EIO;
> + goto out;
> + }
> +out:
> + if (of_vsa_addr) {
> + g_free(of_vsa_addr);
> + }
> + if (file_name) {
> + g_free(file_name);
> + }
More useless-if-before-free
> + return res;
> +}
> +void bdrv_vxhs_init(void)
> +{
> + trace_vxhs_bdrv_init(vxhs_drv_version);
> + bdrv_register(&bdrv_vxhs);
> +}
> +
> +/*
> + * The line below is how our drivier is initialized.
> + * DO NOT TOUCH IT
> + */
This is a rather pointless comment - the function name itself makes
it obvious what this is doing.
> +block_init(bdrv_vxhs_init);
> diff --git a/block/vxhs.h b/block/vxhs.h
> new file mode 100644
> index 0000000..1b678e5
> --- /dev/null
> +++ b/block/vxhs.h
> +#define vxhsErr(fmt, ...) { \
> + time_t t = time(0); \
> + char buf[9] = {0}; \
> + strftime(buf, 9, "%H:%M:%S", localtime(&t)); \
> + fprintf(stderr, "[%s: %lu] %d: %s():\t", buf, pthread_self(), \
> + __LINE__, __func__); \
> + fprintf(stderr, fmt, ## __VA_ARGS__); \
> +}
This appears unused now, please delete it.
> diff --git a/configure b/configure
> index 8d84919..fc09dc6 100755
> --- a/configure
> +++ b/configure
> @@ -320,6 +320,7 @@ vhdx=""
> numa=""
> tcmalloc="no"
> jemalloc="no"
> +vxhs="yes"
You should set this to "", to indicate that configure should
auto-probe for support. Setting it to 'yes' indicates a mandatory
requirement, which we don't want for an optional library like yours.
This would fix the automated build failure report this patch got.
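Something modelled on the existing library probes would do, e.g.
(sketch only - I'm guessing at the libqnio header and library names,
so adjust to whatever the library actually installs):

  vxhs=""
  ...
  ##########################################
  # VxHS / libqnio probe
  if test "$vxhs" != "no" ; then
    cat > $TMPC <<EOF
  #include <qnio/qnio_api.h>
  int main(void) { return 0; }
  EOF
    if compile_prog "" "-lqnio" ; then
      vxhs=yes
    else
      if test "$vxhs" = "yes" ; then
        feature_not_found "vxhs" "Install libqnio devel package"
      fi
      vxhs=no
    fi
  fi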
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index 5e2d7d7..064d620 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1693,12 +1693,13 @@
> #
> # @host_device, @host_cdrom: Since 2.1
> # @gluster: Since 2.7
> +# @vxhs: Since 2.7
> #
> # Since: 2.0
> ##
> { 'enum': 'BlockdevDriver',
> 'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
> - 'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
> + 'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom', 'vxhs',
> 'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
> 'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
> 'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
> @@ -2150,6 +2151,22 @@
> 'server': ['GlusterServer'],
> '*debug-level': 'int' } }
>
> +# @BlockdevOptionsVxHS
> +#
> +# Driver specific block device options for VxHS
> +#
> +# @vdisk_id: UUID of VxHS volume
> +#
> +# @server: list of vxhs server IP, port
> +#
> +# Since: 2.7
> +##
> +{ 'struct': 'BlockdevOptionsVxHS',
> + 'data': { 'vdisk_id': 'str',
> + 'server': ['InetSocketAddress'] } }
Is there any chance that you are going to want to support UNIX domain socket
connections in the future? If so, perhaps we could use SocketAddress instead
of InetSocketAddress to allow more flexibility in future.
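i.e. (sketch):

  { 'struct': 'BlockdevOptionsVxHS',
    'data': { 'vdisk_id': 'str',
              'server': ['SocketAddress'] } }

That way a UNIX domain socket transport could be added later without
any schema change.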
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|