From: Cédric Le Goater
Subject: Re: [RFC PATCH 0/4] hw/i2c: i2c slave mode support
Date: Fri, 6 May 2022 18:49:37 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.8.0
Hello Jonathan,

On 5/6/22 16:07, Jonathan Cameron wrote:
> On Thu, 31 Mar 2022 18:57:33 +0200
> Klaus Jensen <its@irrelevant.dk> wrote:
>
>> From: Klaus Jensen <k.jensen@samsung.com>
>>
>> Hi all,
>>
>> This RFC series adds I2C "slave mode" support for the Aspeed I2C
>> controller as well as the necessary infrastructure in the i2c core to
>> support this.
>>
>> Background
>> ~~~~~~~~~~
>>
>> We are working on an emulated NVM Express Management Interface[1] for
>> testing and validation purposes. NVMe-MI is based on the MCTP
>> protocol[2], which may use a variety of underlying transports. The one
>> we are interested in is I2C[3].
>>
>> The first general trickery here is that all MCTP transactions are
>> based on the SMBus Block Write bus protocol[4]. This means that the
>> slave must be able to master the bus to communicate. As you know,
>> hw/i2c/core.c currently does not support this use case.
>>
>> The second issue is how to interact with these mastering devices.
>> Jeremy and Matt (CC'ed) have been working on an MCTP stack for the
>> Linux kernel (already upstream), and an I2C binding driver[5] is
>> currently under review. This binding driver relies on I2C slave mode
>> support in the I2C controller.
>
> Hi Klaus,
>
> Just thought I'd mention I'm also interested in MCTP over I2C emulation
> for a couple of projects:
>
> 1) DMTF SPDM - mostly as a second transport for the kernel stack
>    alongside PCI DOE.
> 2) CXL FM-API - adding support for the Fabric Manager interfaces on
>    emulated CXL switches, which is also typically carried over MCTP.
>    I was thinking of emulating MCTP over PCI VDM, but this has saved me
>    going to the effort of doing that, for now at least :)
>
> I have hacked a really basic MCTP device together using this series and
> it all seems to be working with the kernel stack (subject to a few
> kernel driver bugs that I'll report / send fixes for next week). I'm
> cheating all over the place so far (lots of hard-coded values), but I
> would be interested in a more flexible solution that might perhaps
> share infrastructure with your NVMe-MI work.

Klaus is working on a v2:

http://patchwork.ozlabs.org/project/qemu-devel/patch/20220503225925.1798324-2-pdel@fb.com/

Thanks,

C.
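P.S. For anyone following along, the SMBus Block Write framing mentioned
in the quoted cover letter looks roughly like the sketch below. This is
only my reading of the DSP0237 MCTP-over-SMBus binding; the constants and
helper names are illustrative and are not taken from the series itself.

#include <stdint.h>
#include <stddef.h>

#define MCTP_SMBUS_COMMAND_CODE 0x0F   /* MCTP command code per DSP0237 */

/* SMBus PEC: CRC-8 with polynomial x^8 + x^2 + x + 1 (0x07), init 0x00 */
static uint8_t smbus_pec(uint8_t crc, const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
        }
    }
    return crc;
}

/*
 * Encode one MCTP packet as an SMBus Block Write:
 *
 *   [dest addr|W] [cmd 0x0F] [byte count] [src addr|1] [MCTP packet...] [PEC]
 *
 * The destination address byte is the I2C address phase, so it is not
 * stored in 'out', but it is included in the PEC.  Returns the number of
 * bytes to send after the address phase, or 0 if 'out' is too small.
 */
static size_t mctp_smbus_encode(uint8_t *out, size_t out_len,
                                uint8_t dest_addr7, uint8_t src_addr7,
                                const uint8_t *mctp_pkt, uint8_t pkt_len)
{
    size_t n = 0;
    uint8_t addr_byte = (uint8_t)(dest_addr7 << 1);  /* R/W bit = 0 (write) */
    uint8_t pec;

    if (out_len < (size_t)pkt_len + 4) {
        return 0;
    }

    out[n++] = MCTP_SMBUS_COMMAND_CODE;
    out[n++] = (uint8_t)(pkt_len + 1);          /* byte count: src addr + packet */
    out[n++] = (uint8_t)((src_addr7 << 1) | 1); /* source slave address byte */
    for (uint8_t i = 0; i < pkt_len; i++) {
        out[n++] = mctp_pkt[i];
    }

    pec = smbus_pec(0, &addr_byte, 1);          /* PEC also covers the addr byte */
    pec = smbus_pec(pec, out, n);
    out[n++] = pec;

    return n;
}

The response travels as another block write in the opposite direction,
mastered by the device that was addressed as a slave for the request,
which is why the controller model has to be able to switch between
master and slave roles.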