Re: [RFC PATCH v3 04/11] contrib/plugins: add plugin showcasing new discontinuity related API

From: Alex Bennée
Subject: Re: [RFC PATCH v3 04/11] contrib/plugins: add plugin showcasing new discontinuity related API
Date: Sat, 11 Jan 2025 12:15:18 +0000
User-agent: mu4e 1.12.8; emacs 29.4

Pierrick Bouvier <pierrick.bouvier@linaro.org> writes:
> On 1/10/25 07:15, Alex Bennée wrote:
>> "Julian Ganz" <neither@nut.email> writes:
>>
>>> Hi Alex,
>>>
>>> January 9, 2025 at 3:04 PM, "Alex Bennée" wrote:
>>>> Julian Ganz <neither@nut.email> writes:
>>>>> We recently introduced new plugin API for registration of discontinuity
>>>>> related callbacks. This change introduces a minimal plugin showcasing
>>>>> the new API. It simply counts the occurrences of interrupts, exceptions
>>>>> and host calls per CPU and reports the counts when exiting.
>>>>> ---
>>>>> contrib/plugins/meson.build | 3 +-
>>>>> contrib/plugins/traps.c | 96 +++++++++++++++++++++++++++++++++++++
>>>>> 2 files changed, 98 insertions(+), 1 deletion(-)
>>>>> create mode 100644 contrib/plugins/traps.c
>>>>>
>>>>> diff --git a/contrib/plugins/meson.build b/contrib/plugins/meson.build
>>>>> index 63a32c2b4f..9a3015e1c1 100644
>>>>> --- a/contrib/plugins/meson.build
>>>>> +++ b/contrib/plugins/meson.build
>>>>> @@ -1,5 +1,6 @@
>>>>> contrib_plugins = ['bbv', 'cache', 'cflow', 'drcov', 'execlog',
>>>>> 'hotblocks',
>>>>> - 'hotpages', 'howvec', 'hwprofile', 'ips', 'stoptrigger']
>>>>> + 'hotpages', 'howvec', 'hwprofile', 'ips', 'stoptrigger',
>>>>> + 'traps']
>>>>>
>>>> I wonder if this is better in tests/tcg/plugins? We need to do something
>>>> to ensure it gets covered by CI although we might want to be smarter
>>>> about running it together with a test binary that will actually pick up
>>>> something.
>>>
>>> The callback is intended as an example. The patch-series does contain a
>>> dedicated testing plugin. And iirc the contrib plugins are now built
>>> with the rest of qemu anyway?
>> They are - however we generate additional tests combining the plugins
>> in tests/tcg/plugins with the existing multiarch linux-user and softmmu
>> check-tcg tests. It's a fairly dumb expansion though:
>> # We need to ensure we expand the run-plugin-TEST-with-PLUGIN
>> # pre-requisites manually here as we can't use stems to handle it. We
>> # only expand MULTIARCH_TESTS which are common on most of our targets
>> # to avoid an exponential explosion as new tests are added. We also
>> # add some special helpers the run-plugin- rules can use below.
>> # In addition, extra tests can be added using the
>> # ADDITIONAL_PLUGINS_TESTS variable.
>> ifneq ($(MULTIARCH_TESTS),)
>> $(foreach p,$(PLUGINS), \
>>     $(foreach t,$(MULTIARCH_TESTS) $(ADDITIONAL_PLUGINS_TESTS),\
>>         $(eval run-plugin-$(t)-with-$(p): $t $p) \
>>         $(eval RUN_TESTS+=run-plugin-$(t)-with-$(p))))
>> endif # MULTIARCH_TESTS
>> endif # CONFIG_PLUGIN
>> We also have a hand-hacked test for validating memory
>> instrumentation:
>> # Test plugin memory access instrumentation
>> run-plugin-test-plugin-mem-access-with-libmem.so: \
>>     PLUGIN_ARGS=$(COMMA)print-accesses=true
>> run-plugin-test-plugin-mem-access-with-libmem.so: \
>>     CHECK_PLUGIN_OUTPUT_COMMAND= \
>>     $(SRC_PATH)/tests/tcg/multiarch/check-plugin-output.sh \
>>     $(QEMU) $<
>> test-plugin-mem-access: CFLAGS+=-pthread -O0
>> test-plugin-mem-access: LDFLAGS+=-pthread -O0
>> That said, as I mention in the reply to the cover letter, the traps
>> stuff might be better exercised with a functional test, so it could
>> utilise a plugin built in contrib just as easily.
>>
>
> I agree - as was discussed in previous versions, we should add a
> functional test for this. I'm not sure if we should write a custom and
> complicated test, or simply boot and shut down an existing image and
> call it a day.
>
> Do you have any opinion on this Alex?
An existing image-based test would be fine, although I'd favour one that
exercises multiple exception types (e.g. something with firmware and
hypervisor transitions on Arm, or the equivalent on other arches).
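Whatever image we pick, the check itself can probably just match the
per-vcpu summary the plugin reports when exiting. The exit handler isn't
quoted in this thread, so the sketch below is only a guess at what it
might look like - the TrapCounters field names and the output format are
assumptions:

    /*
     * Hypothetical exit handler: walk the scoreboard and print one
     * summary line per vcpu. Not the actual traps.c - field names and
     * report format are assumed.
     */
    #include <glib.h>
    #include <inttypes.h>
    #include <qemu-plugin.h>

    typedef struct {
        uint64_t interrupts;
        uint64_t exceptions;
        uint64_t hostcalls;
    } TrapCounters;

    static struct qemu_plugin_scoreboard *traps;
    static unsigned int max_vcpus;

    static void plugin_exit(qemu_plugin_id_t id, void *udata)
    {
        g_autoptr(GString) report = g_string_new("");

        for (unsigned int i = 0; i < max_vcpus; i++) {
            TrapCounters *count = qemu_plugin_scoreboard_find(traps, i);
            g_string_append_printf(report,
                                   "cpu %u: %" PRIu64 " interrupts, "
                                   "%" PRIu64 " exceptions, "
                                   "%" PRIu64 " hostcalls\n",
                                   i, count->interrupts,
                                   count->exceptions, count->hostcalls);
        }
        qemu_plugin_outs(report->str);
        qemu_plugin_scoreboard_free(traps);
    }

    /* registered from qemu_plugin_install() with:
     *   qemu_plugin_register_atexit_cb(id, plugin_exit, NULL);
     */

A boot-and-shutdown functional test could then simply match those lines
in the plugin log rather than needing anything clever.
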
>
>>>
>>>>> +QEMU_PLUGIN_EXPORT
>>>>> +int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
>>>>> +                        int argc, char **argv)
>>>>> +{
>>>>> +    if (!info->system_emulation) {
>>>>> +        fputs("trap plugin can only be used in system emulation mode.\n",
>>>>> +              stderr);
>>>>> +        return -1;
>>>>> +    }
>>>>> +
>>>>> +    max_vcpus = info->system.max_vcpus;
>>>>> +    traps = qemu_plugin_scoreboard_new(sizeof(TrapCounters));
>>>>> +    qemu_plugin_register_vcpu_init_cb(id, vcpu_init);
>>>>> +    qemu_plugin_vcpu_for_each(id, vcpu_init);
>>>>>
>>>> Hmm, at first glance this seems redundant - however I guess this is
>>>> covering the use case where you load the plugin after the system is up
>>>> and running.
>>>
>>> Yep, but really that was just me being paranoid.
>>>
>>>> I wonder if you have unearthed a foot-gun in the API that is easy to
>>>> fall into? Maybe we should expand qemu_plugin_register_vcpu_init_cb to
>>>> call the callback immediately for existing vcpus?
>>>
>>> Would probably not hurt.
>>>
>>> Regards,
>>> Julian
>>
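For reference, a minimal sketch of the register-then-iterate pattern
being discussed - this is not the actual traps.c (the TrapCounters
layout and the vcpu_init body aren't quoted above, so they are
assumptions here):

    /*
     * Sketch only: register vcpu_init for vcpus created later *and*
     * run it over any vcpus that already exist, covering the case
     * where the plugin is loaded after the system is up.
     */
    #include <string.h>
    #include <stdint.h>
    #include <qemu-plugin.h>

    QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;

    typedef struct {
        uint64_t interrupts;   /* assumed field names */
        uint64_t exceptions;
        uint64_t hostcalls;
    } TrapCounters;

    static struct qemu_plugin_scoreboard *traps;

    static void vcpu_init(qemu_plugin_id_t id, unsigned int vcpu_index)
    {
        /* reset this vcpu's slot; harmless if it runs twice for a vcpu */
        TrapCounters *count = qemu_plugin_scoreboard_find(traps, vcpu_index);
        memset(count, 0, sizeof(*count));
    }

    QEMU_PLUGIN_EXPORT
    int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
                            int argc, char **argv)
    {
        traps = qemu_plugin_scoreboard_new(sizeof(TrapCounters));
        /* vcpus created from now on ... */
        qemu_plugin_register_vcpu_init_cb(id, vcpu_init);
        /* ... and any vcpus that already exist when loading late */
        qemu_plugin_vcpu_for_each(id, vcpu_init);
        return 0;
    }

If qemu_plugin_register_vcpu_init_cb ever grows the "call it immediately
for existing vcpus" behaviour suggested above, the explicit
qemu_plugin_vcpu_for_each() call becomes redundant.
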
--
Alex Bennée
Virtualisation Tech Lead @ Linaro