
Re: [GSoC/Outreachy QEMU project proposal] Measure and Analyze QEMU Performance


From: Wainer dos Santos Moschetta
Subject: Re: [GSoC/Outreachy QEMU project proposal] Measure and Analyze QEMU Performance
Date: Mon, 27 Jan 2020 16:42:04 -0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.4.0


On 1/21/20 12:07 PM, Aleksandar Markovic wrote:
On Mon, Jan 20, 2020 at 3:51 PM Stefan Hajnoczi <address@hidden> wrote:
On Sat, Jan 18, 2020 at 03:08:37PM +0100, Aleksandar Markovic wrote:
3) The community will be given all devised performance measurement methods in 
the form of easily reproducible step-by-step setup and execution procedures.
Tracking performance is a good idea and something that has not been done
upstream yet.
Thanks for the interest, Stefan!

  A few questions:

  * Will benchmarks be run automatically (e.g. nightly or weekly) on
    someone's hardware or does every TCG architecture maintainer need to
    run them manually for themselves?
If the community wants it, definitely yes. Once the methodology is
developed, it should be straightforward to set up nightly and/or weekly
benchmarks - that could definitely include sending mails with reports
to the entire list, or just to individuals or subgroups. The recipient
choice is just a matter of having decent criteria about the
appropriateness of the information in the message (e.g. not flooding
the list with data most people are not really interested in).

Linux-user tests are typically very quick, and nightly runs are quite
feasible. They would run on someone's hardware, of course, and
consistently always on the same hardware, if possible. If it makes
sense, one could set up multiple test beds with a variety of hardware
configurations.
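
Just to make this concrete, a rough sketch of such a nightly
measurement could look something like the following (the QEMU binary
name and the workload path are only placeholders, not real project
paths):

  #!/usr/bin/env python3
  # Sketch: time a linux-user binary under QEMU several times and
  # append the median wall-clock time to a CSV history file.
  # "qemu-mips" and "./bench/sha1" are placeholders.
  import csv, statistics, subprocess, time
  from datetime import datetime, timezone

  QEMU = "qemu-mips"           # placeholder: any qemu linux-user binary
  WORKLOAD = ["./bench/sha1"]  # placeholder: a statically linked test program
  RUNS = 5

  samples = []
  for _ in range(RUNS):
      start = time.perf_counter()
      subprocess.run([QEMU] + WORKLOAD, check=True,
                     stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
      samples.append(time.perf_counter() - start)

  with open("linux-user-perf.csv", "a", newline="") as f:
      csv.writer(f).writerow(
          [datetime.now(timezone.utc).isoformat(), QEMU, WORKLOAD[0],
           round(statistics.median(samples), 4)])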

For system-mode tests, I know they are much more difficult to
automate, and, on top of that, there is a greater risk of
hangs/crashes. Also, considering the number of machines we support,
those tests could consume much more time - perhaps even one day would
not be sufficient if we have many machines and boot/shutdown
variants. For these reasons, weekly executions would perhaps be more
appropriate for them, and, in general, given the greater complexity,
expectations for system-mode performance tests should be kept
quite low for now.
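
That said, even a single boot/shutdown cycle can be timed in the same
spirit. A rough sketch, assuming a guest image that is configured to
power itself off right after boot (the machine type and image path are
placeholders):

  #!/usr/bin/env python3
  # Sketch: time one system-mode boot/shutdown cycle.
  # Assumes the guest powers off on its own after booting;
  # machine type and image path are placeholders.
  import subprocess, time

  cmd = ["qemu-system-x86_64", "-machine", "pc", "-m", "512",
         "-nographic",
         "-drive", "file=./images/poweroff-test.img,format=raw"]

  start = time.perf_counter()
  subprocess.run(cmd, check=True, timeout=600,   # guard against hangs
                 stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
  print("boot+shutdown: %.1f s" % (time.perf_counter() - start))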

  * Where will the benchmark result history be stored?

If emailing is set up, the results could be reconstructed from the
emails. But, yes, it would be better if the result history were kept
somewhere on an internet-connected file server.


If you eventually choose GitLab CI for weekly/nightly executions, then the results can simply be archived as job artifacts [1].

Also, dedicated machines can be attached to GitLab CI as runners, so the system-mode experiments always run in the same environment.

[1] https://docs.gitlab.com/ee/user/project/pipelines/job_artifacts.html
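
For instance, a scheduled job along these lines (the benchmark script
name is just a placeholder) would keep every run's results downloadable
from the pipeline page:

  # .gitlab-ci.yml fragment (sketch only)
  perf-bench:
    tags:
      - qemu-perf-runner      # dedicated runner, so the hardware stays the same
    only:
      - schedules             # triggered by a nightly/weekly pipeline schedule
    script:
      - scripts/run-perf-bench.sh results/
    artifacts:
      paths:
        - results/
      expire_in: 1 year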

IMHO, it is a very good GSoC proposal.

- Wainer


Yours,
Aleksandar

Stefan



