Voice assistants interact with a wide range of skills, each potentially developed in a different language (Python, Java, etc.). I'm curious how the core system manages this without requiring everything to be written in multiple languages (I assume it's mostly wrappers that support the other programming languages). The program I'm working on uses Python for the core, but if there aren't many solutions for Python, I'd be happy to rewrite the core in another language; it isn't that big anyway.
I haven't found much information on the specifics of how language variety is handled within these platforms. I was hoping to learn about the common approaches voice assistants use to support plugins built in different languages. The only solution I've come up with myself is to run each non-Python plugin through subprocess (a rough sketch is below), but I'd like to know whether there are other alternatives.
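To make the subprocess idea concrete, this is roughly what I mean; the `plugins/weather.js` path and the JSON message shape are just placeholders I made up:

```python
import json
import subprocess

def call_plugin(command, request):
    """Run a non-Python plugin as a child process and exchange JSON over stdin/stdout."""
    proc = subprocess.run(
        command,                      # e.g. ["node", "plugins/weather.js"]
        input=json.dumps(request),    # request is written to the plugin's stdin
        capture_output=True,
        text=True,
        check=True,
    )
    # The plugin is expected to print a single JSON reply to stdout.
    return json.loads(proc.stdout)

# Hypothetical plugin invocation with a made-up message format:
reply = call_plugin(["node", "plugins/weather.js"],
                    {"intent": "GetWeather", "city": "Berlin"})
print(reply)
```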
If I'm understanding your question correctly: two programs, whether they're written in the same language or different ones, communicate through an API (Application Programming Interface). In the case of Alexa Skills, that's a RESTful HTTP API, and it's very likely that other systems designed to work with external programs expose an HTTP API as well.
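As a minimal sketch of that idea (the `/handle` path and the JSON fields are invented for the example), the core only needs to speak HTTP and agree on a message format; the skill on the other end could be written in Java, Node, or anything else that can serve HTTP:

```python
import json
import urllib.request

def call_skill(skill_url, payload):
    """POST a JSON request to a skill's HTTP endpoint and return its JSON reply.

    The skill can be implemented in any language; the core and the skill
    only have to agree on the URL and the JSON structure (both made up here).
    """
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        skill_url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Hypothetical skill running locally, perhaps implemented in Java or Node:
reply = call_skill("http://localhost:5000/handle",
                   {"intent": "GetWeather", "city": "Berlin"})
print(reply)
```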
You can see that Alexa provides an HTTP API called SMAPI (Alexa Skill Management API), with the endpoint https://api.amazonalexa.com. The SDKs Amazon provides (Python, Node.js and Java) are libraries that give you convenient functions for interacting with Alexa Skills, but under the hood you'll see that they are calling SMAPI. You could use another language; you'd just have to write your own functions/methods that call the corresponding endpoints.
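As a sketch of what "calling the endpoints yourself" looks like, here's a direct HTTP request to SMAPI's skill-listing endpoint without any SDK. The exact path, query parameter and header format should be checked against the SMAPI reference, and the vendor ID and access token below are placeholders:

```python
import requests

# Placeholders: obtain a real Login with Amazon (LWA) access token and your
# vendor ID from the Alexa developer console / LWA OAuth flow.
ACCESS_TOKEN = "ATZA_PLACEHOLDER"
VENDOR_ID = "M1234567890"

# List a vendor's skills by calling SMAPI directly.
# Note: check the SMAPI docs for whether the token needs a "Bearer " prefix.
resp = requests.get(
    "https://api.amazonalexa.com/v1/skills",
    params={"vendorId": VENDOR_ID},
    headers={"Authorization": ACCESS_TOKEN},
)
resp.raise_for_status()
for skill in resp.json().get("skills", []):
    print(skill)
```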
Amazon also has great documentation, for example on how request processing works in Python.
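For reference, the request-handling pattern from that documentation looks roughly like this; a minimal sketch using the ASK SDK for Python, where the handler and speech text are illustrative:

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response


class LaunchRequestHandler(AbstractRequestHandler):
    """Handles the LaunchRequest sent when the user opens the skill."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        return handler_input.response_builder.speak("Hello from the skill!").response


# The SkillBuilder dispatches each incoming request to the first handler whose
# can_handle() returns True; lambda_handler() exposes the skill to AWS Lambda.
sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
handler = sb.lambda_handler()
```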