Re: [PATCH] acpi: Fix access to PM1 control and status registers
From: Michael S. Tsirkin
Subject: Re: [PATCH] acpi: Fix access to PM1 control and status registers
Date: Thu, 23 Jul 2020 08:46:45 -0400
On Thu, Jul 16, 2020 at 11:05:06AM +0200, Cédric Le Goater wrote:
> On 7/2/20 1:12 PM, Michael S. Tsirkin wrote:
> > On Wed, Jul 01, 2020 at 01:48:36PM +0100, Anthony PERARD wrote:
> >> On Wed, Jul 01, 2020 at 08:01:55AM -0400, Michael S. Tsirkin wrote:
> >>> On Wed, Jul 01, 2020 at 12:05:49PM +0100, Anthony PERARD wrote:
> >>>> The ACPI spec states that "Accesses to PM1 control registers are
> >>>> accessed through byte and word accesses." (In section 4.7.3.2.1 PM1
> >>>> Control Registers of my old spec copy, rev 4.0a).
> >>>>
> >>>> With commit 5d971f9e6725 ("memory: Revert "memory: accept mismatching
> >>>> sizes in memory_region_access_valid""), it is no longer possible to
> >>>> access the pm1_cnt register by reading a single byte, and that is done
> >>>> by at least one piece of Xen firmware, "hvmloader".
> >>>>
> >>>> Also, take care of the PM1 Status Registers, for which the spec also
> >>>> says "Accesses to the PM1 status registers are done through byte or
> >>>> word accesses" (in section 4.7.3.1.1 PM1 Status Registers).
> >>>>
> >>>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> >>>
> >>>
> >>> Can't we set impl.min_access_size to convert byte accesses
> >>> to word accesses?
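
(For readers following along: the suggestion above amounts to roughly the
following ops declaration. This is a minimal sketch, not the actual
hw/acpi/core.c code; the exact sizes and the acpi_pm_cnt_write name are
assumptions.)

    static const MemoryRegionOps acpi_pm_cnt_ops_sketch = {
        .read  = acpi_pm_cnt_read,     /* callback mentioned in this thread */
        .write = acpi_pm_cnt_write,    /* assumed counterpart, name hypothetical */
        .endianness = DEVICE_LITTLE_ENDIAN,
        .valid.min_access_size = 1,    /* guests (e.g. hvmloader) may do byte accesses */
        .valid.max_access_size = 2,
        .impl.min_access_size  = 2,    /* ask the core to widen them to word accesses */
        .impl.max_access_size  = 2,
    };

With that, a guest byte read would be dispatched to the callback as a 2-byte
access, which is exactly where the addr/shift question below comes from.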
> >>
> >> I actually tried, but when reading `addr` or `addr+1` I got the same
> >> value. So I guess `addr` wasn't taken into account.
> >>
> >> I've checked again: with `.impl.min_access_size = 2`, the width that the
> >> function acpi_pm_cnt_read() gets is 2, but addr isn't changed, so the
> >> function is still supposed to shift the result (or the value to write)
> >> based on addr, I guess.
> >
> > True, the address is misaligned. I think the memory core should just
> > align it - this is what devices seem to expect.
> > However, the result is shifted properly, so just align addr and be done
> > with it.
> >
> >
> > In fact I have a couple more questions. Paolo - maybe you can answer some
> > of these?
> >
> >
> >
> >     if (!access_size_min) {
> >         access_size_min = 1;
> >     }
> >     if (!access_size_max) {
> >         access_size_max = 4;
> >     }
> >
> >>>>>
> >
> > So 8-byte accesses are split up unless the ops explicitly request 8 bytes.
> > That's undocumented, right? Why are we doing this?
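
To spell out what those defaults do, here is a standalone sketch (outside of
QEMU) re-using the expressions quoted above:

    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))
    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    int main(void)
    {
        unsigned access_size_min = 1;   /* default when the ops declare nothing */
        unsigned access_size_max = 4;   /* default when the ops declare nothing */
        unsigned size = 8;              /* guest performs an 8-byte access */

        unsigned access_size = MAX(MIN(size, access_size_max), access_size_min);

        /* prints 4: the 8-byte access is carried out as two 4-byte callbacks */
        printf("access_size = %u\n", access_size);
        return 0;
    }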
> >
> >>>>>
> >
> >
> > /* FIXME: support unaligned access? */
> >
> >>>>>
> >
> > Shouldn't we document that impl.unaligned is ignored right now?
> > Shouldn't we do something to make sure callbacks do not get
> > unaligned accesses they don't expect?
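
(The two flags being discussed sit next to each other in MemoryRegionOps. A
sketch of the intent, assuming QEMU's memory.h and with hypothetical
callbacks; as noted above, the impl.unaligned side is not actually honoured by
the core today.)

    static const MemoryRegionOps example_ops = {
        .read  = example_read,       /* hypothetical callbacks */
        .write = example_write,
        /* valid.*: which guest accesses are accepted at all */
        .valid.unaligned = true,     /* guest may issue unaligned accesses */
        /* impl.*: what the callbacks themselves can handle; if false, the
         * core is expected to turn the access into aligned ones */
        .impl.unaligned  = false,
    };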
> >
> >
> > In fact, there are just 2 devices which set valid.unaligned but
> > not impl.unaligned:
> > aspeed_smc_ops
> > raven_io_ops
> >
> >
> > Is this intentional?
>
> I think it is a leftover from the initial implementation. The model works
> fine without valid.unaligned being set and with your patch.
>
> C.
Oh good, we can drop this. What about raven? Hervé, could you please comment?
>
> > Do these in fact expect the memory core to
> > provide aligned addresses to the callbacks?
> > Given impl.unaligned is not implemented, can we drop it completely?
> > Cc'ing a bunch of people who might know.
> >
> > Can relevant maintainers please comment? Thanks a lot!
> >
> >>>>>
> >
> >
> >     access_size = MAX(MIN(size, access_size_max), access_size_min);
> >     access_mask = MAKE_64BIT_MASK(0, access_size * 8);
> >
> >>>>>
> >
> >
> > So with a 1-byte access at address 1 and impl.min_access_size = 2, we get:
> > access_size = 2
> > access_mask = 0xffff
> > addr = 1
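
The same numbers can be reproduced outside QEMU (a standalone sketch; the
MAKE_64BIT_MASK expansion is copied from include/qemu/bitops.h as I read it):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))
    #define MAX(a, b) ((a) > (b) ? (a) : (b))
    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    int main(void)
    {
        unsigned size = 1;              /* guest does a 1-byte read at address 1 */
        unsigned access_size_min = 2;   /* impl.min_access_size = 2 */
        unsigned access_size_max = 4;

        unsigned access_size = MAX(MIN(size, access_size_max), access_size_min);
        uint64_t access_mask = MAKE_64BIT_MASK(0, access_size * 8);

        printf("access_size = %u\n", access_size);              /* 2 */
        printf("access_mask = 0x%" PRIx64 "\n", access_mask);   /* 0xffff */
        return 0;
    }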
> >
> >
> >
> > <<<<
> >
> >
> >     if (memory_region_big_endian(mr)) {
> >         for (i = 0; i < size; i += access_size) {
> >             r |= access_fn(mr, addr + i, value, access_size,
> >                            (size - access_size - i) * 8, access_mask, attrs);
> >
> >>>>
> >
> > now shift is -8.
> >
> > <<<<
> >
> >
> >         }
> >     } else {
> >         for (i = 0; i < size; i += access_size) {
> >             r |= access_fn(mr, addr + i, value, access_size, i * 8,
> >                            access_mask, attrs);
> >         }
> >     }
> >
> >
> > <<<<
> >
> > The callback is invoked with addr 1 and size 2:
> >
> >>>>>
> >
> >
> >     uint64_t tmp;
> >
> >     tmp = mr->ops->read(mr->opaque, addr, size);
> >     if (mr->subpage) {
> >         trace_memory_region_subpage_read(get_cpu_index(), mr, addr,
> >                                          tmp, size);
> >     } else if (trace_event_get_state_backends(TRACE_MEMORY_REGION_OPS_READ)) {
> >         hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr);
> >         trace_memory_region_ops_read(get_cpu_index(), mr, abs_addr,
> >                                      tmp, size);
> >     }
> >     memory_region_shift_read_access(value, shift, mask, tmp);
> >     return MEMTX_OK;
> >
> > <<<<
> >
> > Let's assume the callback returned 0xabcd.
> >
> > This is where we shift the return value:
> >
> >>>>>
> >
> >
> > static inline void memory_region_shift_read_access(uint64_t *value,
> >                                                    signed shift,
> >                                                    uint64_t mask,
> >                                                    uint64_t tmp)
> > {
> >     if (shift >= 0) {
> >         *value |= (tmp & mask) << shift;
> >     } else {
> >         *value |= (tmp & mask) >> -shift;
> >     }
> > }
> >
> >
> > So we do (0xabcd & 0xffff) >> 8, and we get 0xab.
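
Putting that arithmetic in one runnable place (a sketch with the same values
as above; the real code computes the shift with unsigned operands, here it is
done with ints for clarity):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int size = 1, access_size = 2, i = 0;
        int shift = (size - access_size - i) * 8;   /* -8, big-endian branch */
        uint64_t mask = 0xffff, tmp = 0xabcd, value = 0;

        /* same logic as memory_region_shift_read_access() */
        if (shift >= 0) {
            value |= (tmp & mask) << shift;
        } else {
            value |= (tmp & mask) >> -shift;
        }

        printf("shift = %d, value = 0x%" PRIx64 "\n", shift, value);  /* -8, 0xab */
        return 0;
    }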
> >
> >>>>
> >
> > How about aligning the address for now? Paolo?
> >
> > -->
> >
> > memory: align to min access size
> >
> > If impl.min_access_size > valid.min_access_size, access callbacks
> > can get a misaligned access as the size is increased.
> > They don't expect that; let's fix it in the memory core.
> >
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> >
> > ---
> >
> >
> > diff --git a/memory.c b/memory.c
> > index 9200b20130..ea489ce405 100644
> > --- a/memory.c
> > +++ b/memory.c
> > @@ -532,6 +532,7 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
> >      }
> >
> >      /* FIXME: support unaligned access? */
> > +    addr &= ~(access_size_min - 1);
> >      access_size = MAX(MIN(size, access_size_max), access_size_min);
> >      access_mask = MAKE_64BIT_MASK(0, access_size * 8);
> >      if (memory_region_big_endian(mr)) {
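
As a quick sanity check of that one-liner (a sketch; it assumes
access_size_min is a power of two, which the sizes used by the memory core
are):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t addr = 1;              /* misaligned 1-byte access at offset 1 */
        unsigned access_size_min = 2;   /* impl.min_access_size = 2 */

        /* same masking as the patch, with the mask widened to addr's type */
        addr &= ~(uint64_t)(access_size_min - 1);

        /* prints 0: the callback is now invoked with an aligned address */
        printf("addr = %" PRIu64 "\n", addr);
        return 0;
    }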
> >> --
> >> Anthony PERARD
> >