From: Mark Cave-Ayland
Subject: Re: [Qemu-ppc] [Qemu-devel] [PATCH] macio: fix overflow in lba to offset conversion for ATAPI devices
Date: Mon, 4 Jan 2016 20:54:21 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Icedove/38.5.0

On 04/01/16 20:36, John Snow wrote:

> On 01/04/2016 02:15 PM, Mark Cave-Ayland wrote:
>> On 04/01/16 19:04, P J P wrote:
>>
>>> +-- On Mon, 4 Jan 2016, Mark Cave-Ayland wrote --+
>>> |      /* Calculate current offset */
>>> | -    offset = (int64_t)(s->lba << 11) + s->io_buffer_index;
>>> | +    offset = ((int64_t)(s->lba) << 11) + s->io_buffer_index;
>>>
>>> Maybe ((int64_t)s->lba << 11) ? No parenthesis around s->lba.
>>
>> Yes that works here too (perhaps I was just being over-cautious).
>> Alex/John, please let me know if you want me to resubmit.
>>
> 
> PJP's version should work just fine. I won't ask you to resubmit, though...

Great, thanks :)
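
(As a quick standalone illustration of why the cast placement matters: the
variable name echoes the field in the patch above, but the value is made up
and this is not code from the QEMU tree.)

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int lba = 0x100000;   /* made-up LBA: 0x100000 * 2048 bytes = 2 GiB */

        /* Shift is evaluated in 32-bit int first, overflows (strictly
         * undefined behaviour; common compilers wrap into the sign bit),
         * and only the already-wrong result gets widened to 64 bits. */
        int64_t bad = (int64_t)(lba << 11);

        /* Widen first, then shift in 64 bits: 2 GiB, as intended. */
        int64_t good = (int64_t)lba << 11;

        printf("bad=%lld good=%lld\n", (long long)bad, (long long)good);
        return 0;
    }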

> ...But, well, while we're here, I have a question for you:
> 
> So s->lba is an int that we left-shift by 11, for a max of (2^43 - 2^11);
> then we add it to s->io_buffer_index, a uint64_t, so this statement
> could still in theory overflow.
> 
> Except not really, since io_buffer_index is bounded (in general) by
> io_buffer_total_len, which is usually (IDE_DMA_BUF_SECTORS*512 + 4) ->
> ~132K.
> 
> I don't think there's any rigorous bounds-checking of io_buffer_index,
> just ad-hoc checking when we're good enough to remember to do it. And we
> don't seem to do it anywhere in macio. Is it worth peppering in an
> assert somewhere that io_buffer_index is reasonably small?

The DBDMA engine is limited to 16-bit transfers, so the maximum transfer
size is 64K; s->io_buffer_index holds the current position within that
transfer, so unless we get some very large disks I think we should be
okay here?
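
(For what it's worth, a minimal sketch of the kind of assert John mentions;
where exactly it would go in the macio DMA path, and whether
io_buffer_total_len is the right bound there, are assumptions for
illustration rather than a statement about the actual code:)

    /* Hypothetical bounds check before the offset calculation; the exact
     * placement and bound are illustrative, not taken from hw/ide/macio.c. */
    assert(s->io_buffer_index <= s->io_buffer_total_len);
    offset = ((int64_t)s->lba << 11) + s->io_buffer_index;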


ATB,

Mark.



