From: John Snow
Subject: Re: [RFC PATCH 0/9] tests: run python tests under the build/tests/venv environment
Date: Fri, 13 May 2022 11:25:22 -0400



On Fri, May 13, 2022, 4:35 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
On Thu, May 12, 2022 at 08:06:00PM -0400, John Snow wrote:
> RFC: This is a very early, crude attempt at switching over to an
> external Python package dependency for QMP. This series does not
> actually make the switch in and of itself, but instead just switches to
> the paradigm of using a venv in general to install the QEMU python
> packages instead of using PYTHONPATH to load them from the source tree.
>
> (By installing the package, we can process dependencies.)
>
> I'm sending it to the list so I can show you some of what's ugly so far
> and my notes on how I might make it less ugly.
>
> (1) This doesn't trigger venv creation *from* iotests, it merely prints
> a friendly error message if "make check-venv" has not been run
> first. Not the greatest.

So if we run the sequence

  mkdir build
  cd build
  ../configure
  make
  ./tests/qemu-iotests/check 001

It won't work anymore, until we 'make check-venv' (or simply
'make check') ?

In this RFC as-is, that's correct. I want to fix that, because I dislike it too.

Several ways to go about that.

I'm somewhat inclined to say that venv should be created
unconditionally by default. ie a plain 'make' should always build
everything needed to be able to invoke the tests directly.

I'm leaning to agree with you, but I see Kevin has some doubts. My #1 goal for Python refactoring is usually minimizing interruption to the block maintainers. I do like the idea of just having it always available and always taken care of, though.

(This would be useful for making sure that any python scripts or utilities that need access to qmp/machine can be made to work, too. We can discuss this problem a little later - the scripts/qmp/ folder needs some work. It will come up in the full series to make the switch.)

OTOH, a concern about unconditionally building the test venv is that it might introduce new dependencies for lots of downstreams that don't even run the tests yet. I think I am partial to having it install on-demand, because then the dependencies are opt-in. mjt told me that Debian does not run make check as part of its build yet, for example.

I guess I can see it working either way. I think in the very immediate term I'm motivated to have it be on-demand, but long term I think "as part of make" is the eventual goal.


> (2) This isn't acceptable for SRPM builds, because it uses PyPI to fetch
> packages just-in-time. My thought is to use an environment variable like
> QEMU_CHECK_NO_INTERNET that changes the behavior of the venv setup
> process. We can use "--system-site-packages" as an argument to venv
> creation and "--no-index" as an argument to pip installation to achieve
> good behavior in SRPM building scenarios. It'd be up to the spec-writer
> to opt into that behavior.

I think I'd expect --system-site-packages to be the default behaviour.
We expect QEMU to be compatible with the packages available in the
distros that we're targetting. So if the dev has the python packages
installed from their distro, we should be using them preferentially.

This is similar to how we bundle slirp/capstone/etc, but will
preferentially use the distro version if it is available.

If you think that behavior should apply to tests as well, then OK. I shied away from having it as the default because it's somewhat unusual to "cede control" in a venv like this - the mere presence of certain packages in the system environment may change behavior of certain python libraries. It is a less well defined environment inherently.

I'll do some testing and I can try having it always do this. I'm curious about cases where I might require "exactly mypy 0.780" and the user has mypy 0.770 installed, or maybe even the other way around.
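For illustration, here's a toy version of the kind of pin check being discussed (real pip relies on the `packaging` library's specifier handling; `parse_pin` and `satisfies` are hypothetical helpers that only understand exact `==` pins):

```python
# Toy check for exact version pins like "mypy==0.780".
# Real pip/packaging handles ranges, wildcards, pre-releases, etc.;
# this only illustrates why mypy 0.770 from system site-packages
# would not satisfy a requirement pinned to exactly 0.780.

def parse_pin(requirement: str) -> tuple:
    """Split 'name==version' into (name, version)."""
    name, _, version = requirement.partition("==")
    return name.strip(), version.strip()

def satisfies(installed: str, pinned: str) -> bool:
    """Compare dotted versions numerically, component by component."""
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(installed) == as_tuple(pinned)

if __name__ == "__main__":
    name, want = parse_pin("mypy==0.780")
    print(name, satisfies("0.770", want))  # the system copy is too old
```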

It may be surprising as to when the system packages get used and when they don't - instinctively I like things that are less dynamic, but I see the argument for wanting to prefer system packages when possible. At least for the sake of downstream.

(I kind of feel like upstream should likewise prefer the upstream python packages too, but ... You've got a lot more packaging experience than me, so I'm willing to trust you on this point, but I'm personally a little uncertain.)


> (3) Using one venv for *all* tests means that avocado comes as a pre-req
> for iotests -- which adds avocado as a BuildRequires for the Fedora
> SRPM. That's probably not ideal. It may be better to model the test venv
> as something that can be created in stages: the "core" venv first, and
> the avocado packages only when needed.
>
> You can see in these patches that I wasn't really sure how to tie the
> check-venv step as a dependency of 'check' or 'check-block', and it
> winds up feeling kind of hacky and fragile as a result.

See above, I'm inclined to say the venv should be created unconditionally

> (Patches 6 and 7 feel particularly fishy.)
>
> What I think I would like to do is replace the makefile logic with a
> Python bootstrapping script. This will allow me to add in environment
> variable logic to accommodate #2 pretty easily. It will also allow
> iotests to call into the bootstrap script whenever it detects the venv
> isn't set up, which it needed to do anyway in order to print a
> user-friendly error message. Lastly, it will make it easier to create a
> "tiered" venv that layers in the avocado dependencies only as-needed,
> which avoids us having to bloat the SRPM build dependencies.

The tests are an area where we still have too much taking place in
Makefiles, as opposed to meson. Can we put a rule in
tests/meson.build to trigger the venv creation? Gets us closer to
being able to run ninja without using make as a wrapper.

Paolo has written a lot about this now, and he had some suggestions on patches 6-8. I'll experiment with that and see if it feels less fragile.
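As a rough sketch, the flag selection in such a bootstrap script might look like this (QEMU_CHECK_NO_INTERNET is the hypothetical variable from the cover letter; the function names and paths are illustrative, not the eventual interface):

```python
import os
import sys

def venv_create_args(venv_dir: str, offline: bool) -> list:
    """Arguments for 'python -m venv'; offline builds lean on distro packages."""
    args = [sys.executable, "-m", "venv", venv_dir]
    if offline:
        # Let the venv see distro-installed packages (SRPM builds).
        args.insert(3, "--system-site-packages")
    return args

def pip_install_args(venv_python: str, offline: bool) -> list:
    """Arguments for installing tests/requirements.txt into the venv."""
    args = [venv_python, "-m", "pip", "install",
            "-r", "tests/requirements.txt"]
    if offline:
        # Never consult PyPI; fail if a dependency is missing locally.
        args += ["--no-index"]
    return args

if __name__ == "__main__":
    offline = bool(os.environ.get("QEMU_CHECK_NO_INTERNET"))
    print(venv_create_args("build/tests/venv", offline))
    print(pip_install_args("build/tests/venv/bin/python", offline))
```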


> In the end, I think that approach will:
>
> - Allow us to run iotests without having to run a manual prep step
> - Keep additional SRPM deps to a minimum
> - Keep makefile hacks to a minimum
>
> The only downside I am really frowning at is that I will have to
> replicate some "update the venv if it's outdated" logic that is usually
> handled by the Make system in the venv bootstrapper. Still, I think it's
> probably the only way to hit all of the requirements here without trying
> to concoct a fairly complex Makefile.

The only reason we need to update the venv is if a python dependency
changes, right? If we're using system packages by default that's
a non-issue. If we're using the python-qemu.qmp as a git submodule,
we presumably only need to re-create the venv if we see that the
git submodule hash has changed. IOW, we don't need to worry about
tracking whether individual python deps are outdated.

The venv should probably not need to be updated very often, but it may happen occasionally.

If tests/requirements.txt changes it should be updated, and if python/setup.cfg|py changes it *might* need to be updated. (e.g. new or removed subpackages, dependency updates, etc. An obvious one coming up is the removal of qemu.qmp from in-tree and having that dependency be added to setup.cfg.)

With the editable installation mode, the venv won't need reinstalling when you edit any of the in-tree python modules (e.g. if you add some debugging prints to machine.py).

Even if we use system packages, we need to check that the version requirements are fulfilled which involves at least re-running pip (not necessarily recreating the whole venv) and allowing it the chance to fetch new deps.

I have no plans to use git submodules.

With regards,
Daniel

Thanks! I appreciate the feedback.

--js
