Some time ago, I asked a similar question, "How to connect GPIO in QEMU-emulated machine to an object in host?", and after some work I found an imperfect but satisfactory solution.
However, now we have virtio support for GPIO, and it would be good to use that instead of a modified mpc8xxx driver. The previous solution was imperfect and difficult to maintain (I only ported it to Buildroot 2021.02 and stopped further maintenance).
Unfortunately, I don't see any host-side implementation of virtio-user-gpio that I could use to connect a GUI to a machine emulated in QEMU.
Is there any library (preferably with Python bindings) that could facilitate this task? Should I start from scratch, implementing the virtio protocol for the GPIO device as defined in the specification?
Finally, after almost a year, I have found an answer to my question. Indeed, the right solution was based on the Rust implementation of vhost-user-gpio. I have created my own fork, with the solution in the gpio-python branch.
I have modified the implementation of MockGpioDevice, making it connect to a JSON-RPC server over the simple HTTP transport:
use jsonrpc::Client;
use jsonrpc::simple_http::{self, SimpleHttpTransport};
use serde_json::json;
use serde_json::value::to_raw_value;

fn client() -> std::result::Result<Client, simple_http::Error> {
    let url = "http://127.0.0.1:8001";
    let t = SimpleHttpTransport::builder()
        .url(url)?
        .build();
    Ok(Client::with_transport(t))
}

fn call(cli: &Client, fun: &str, param: serde_json::Value) -> serde_json::Value {
    let raw_value = Some(to_raw_value(&param).unwrap());
    let request = cli.build_request(fun, raw_value.as_deref());
    let response = cli.send_request(request).expect("send_request failed");
    let resp2: serde_json::Value =
        serde_json::from_str((*response.result.unwrap()).get()).unwrap();
    resp2
}
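For reference, the payloads this client exchanges over HTTP follow the JSON-RPC 2.0 specification. A standard-library Python sketch of the request/response shape (the helper names here are mine, not part of any library):

```python
import json

def build_request(method, params, req_id=1):
    # JSON-RPC 2.0 request body, as posted by the client's HTTP transport
    return json.dumps(
        {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    )

def parse_result(body):
    # Extract the "result" member, mirroring response.result on the Rust side
    return json.loads(body)["result"]

req = build_request("value", [4])
assert json.loads(req)["method"] == "value"

resp = '{"jsonrpc": "2.0", "result": ["OK", 1], "id": 1}'
assert parse_result(resp) == ["OK", 1]
```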
The individual functions handling changes in the state of the GPIOs perform the RPC calls. For example, reading and writing a GPIO pin are implemented as follows:
fn value(&self, gpio: u16) -> Result<u8> {
    if self.value_result.is_err() {
        return self.value_result;
    }
    let resp = call(&self.rpc_client, "value", json!([gpio]));
    println!("{:?}", resp);
    let val: u8 = resp[1].as_u64().unwrap().try_into().unwrap();
    Ok(val)
}
fn set_value(&self, gpio: u16, value: u32) -> Result<()> {
    info!(
        "gpio {} set value to {}",
        self.gpio_names[gpio as usize], value
    );
    if self.set_value_result.is_err() {
        return self.set_value_result;
    }
    let resp = call(&self.rpc_client, "set_value", json!([gpio, value]));
    println!("{:?}", resp);
    Ok(())
}
The JSON-RPC server is implemented in Python using tinyrpc. It is connected to the Gtk GUI adapted from my old solution.
The corresponding read and write pin implementations are very simple:
@dispatcher.public
def value(n):
    return ("OK", gpios[n].val)

@dispatcher.public
def set_value(n, v):
    gpios[n].val = v
    rpc_server.change_handler(n, v)
    return ("OK")
The field val is modified by the send_change function in the GUI:
def send_change(nof_pin, state):
    do_irq = (rpc.gpios[nof_pin].val != state)
    rpc.gpios[nof_pin].val = state
    if do_irq:
        with rpc.gpios[nof_pin].wait_both:
            rpc.gpios[nof_pin].wait_both.notify_all()
        if state == 1:
            with rpc.gpios[nof_pin].wait_rise:
                rpc.gpios[nof_pin].wait_rise.notify_all()
        if state == 0:
            with rpc.gpios[nof_pin].wait_fall:
                rpc.gpios[nof_pin].wait_fall.notify_all()
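The per-pin objects referenced above are not shown in the snippets. A plausible minimal shape, assuming one threading.Condition per wake-up reason (this Pin class is my reconstruction, not code from the repository):

```python
import threading

class Pin:
    # hypothetical per-pin state; attribute names mirror those used by send_change
    def __init__(self):
        self.val = 0            # current pin level
        self.irq_type = 0       # 1 = RISING, 2 = FALLING, 3 = BOTH
        self.wait_rise = threading.Condition()
        self.wait_fall = threading.Condition()
        self.wait_both = threading.Condition()

gpios = [Pin() for _ in range(32)]
```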
These notifications also support handling GPIO-generated interrupts via the wait_for_interrupt function:
def wait_for_interrupt(n):
    if gpios[n].irq_type == 1:  # RISING
        with gpios[n].wait_rise:
            gpios[n].wait_rise.wait()
    if gpios[n].irq_type == 2:  # FALLING
        with gpios[n].wait_fall:
            gpios[n].wait_fall.wait()
    if gpios[n].irq_type == 3:  # BOTH
        with gpios[n].wait_both:
            gpios[n].wait_both.wait()
    return ("OK", 1)
The corresponding function in the vhost-device-gpio is the following:
fn wait_for_interrupt(&self, gpio: u16) -> Result<bool> {
    if self.wait_for_irq_result.is_err() {
        return self.wait_for_irq_result;
    }
    let resp = call(&self.rpc_client, "wait_for_interrupt", json!([gpio]));
    println!("{:?}", resp);
    let val: bool = resp[1].as_u64().unwrap() > 0;
    Ok(val)
}
For testing, I first started the GUI by running python3 gui3.py in a virtual environment with tinyrpc, gevent, werkzeug, and pgi installed.
Then I started the vhost-device-gpio:
LD_LIBRARY_PATH=/home/emb/libgpiod-2.1/lib/.libs/ ./vhost-device-gpio -s /tmp/gpio.sock -l s1
The LD_LIBRARY_PATH must be set because my Debian testing machine ships an old libgpiod, so I had to compile the new version in /home/emb/libgpiod-2.1. Of course, vhost-device-gpio also needed to be built specially:
export PATH_TO_LIBGPIOD=/home/emb/libgpiod-2.1
export SYSTEM_DEPS_LIBGPIOD_NO_PKG_CONFIG=1
export SYSTEM_DEPS_LIBGPIOD_SEARCH_NATIVE="${PATH_TO_LIBGPIOD}/lib/.libs/"
export SYSTEM_DEPS_LIBGPIOD_LIB=gpiod
export SYSTEM_DEPS_LIBGPIOD_INCLUDE="${PATH_TO_LIBGPIOD}/include/"
cargo build --features "mock_gpio"
With the GUI and vhost-device-gpio running, I could start QEMU with a guest OS connecting to my emulated GPIO. For that purpose, I used Linux built for the qemu-aarch64-virt platform with Buildroot 2023.11.1.
Of course, I had to enable CONFIG_GPIO_VIRTIO=m in the Linux kernel configuration. To emulate GPIO interrupts, QEMU had to be patched, as described here.
The QEMU was started with the following additional arguments:
-chardev socket,path=/tmp/gpio.sock0,id=vgpio \
-device vhost-user-gpio-pci,chardev=vgpio,id=gpio \
-object memory-backend-file,id=mem,size=1G,mem-path=/dev/shm,share=on \
-numa node,memdev=mem \
After the emulated machine starts, it is possible to load the driver with modprobe gpio-virtio. Then you can test interrupts with gpiomon 0 12, read pins with gpioget 0 4, or write them with gpioset 0 24=1 (of course, you may change the pin numbers).
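To sanity-check the RPC path without starting QEMU at all, the whole HTTP round trip can be simulated in one process with the Python standard library (the pin state and the value handler here are made up for the demo):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

state = {4: 1}  # hypothetical pin values: pin 4 is high

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # parse the JSON-RPC request and answer the "value" method only
        length = int(self.headers["Content-Length"])
        req = json.loads(self.rfile.read(length))
        result = ["OK", state[req["params"][0]]]
        body = json.dumps(
            {"jsonrpc": "2.0", "result": result, "id": req["id"]}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

srv = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=srv.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d" % srv.server_address[1]
payload = json.dumps(
    {"jsonrpc": "2.0", "method": "value", "params": [4], "id": 1}
).encode()
req = urllib.request.Request(
    url, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())
srv.shutdown()
assert answer["result"] == ["OK", 1]
```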
The described solution is only a proof of concept. Error detection and handling are almost non-existent. Also, stopping the emulation while the guest waits for an interrupt may be difficult. However, I hope this may be a good starting point for further development.