I'm starting out with Kafka.
I see that I'm able to pass headers when producing messages.
Traditionally one would have a web client (a single-page app) where the user logs in via some remote OIDC IdP and receives a token. That token is then sent via an `Authorization: Bearer <token>` header to some RESTful backend, where the token is checked for validity and the payload is processed, saved to a database or elsewhere, and something is returned (or not).
Now there's Apache Kafka. It has a REST Proxy. I can pass headers to the REST Proxy and produce or consume messages, but I'm interested in the "secure my RESTful JSON API" part.
Currently, without Kafka, I have either an OIDC proxy (with Keycloak, that's keycloak-gatekeeper) that filters which requests make it to the backend, or an OIDC client that does token validation as a middleware function inside the backend. In either case, invalid requests don't get "logged" the way they would in Kafka, I assume.
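To make that concrete, here is a minimal sketch of the middleware-style check I mean, using only the JDK's built-in HttpServer; `validateWithIdp` is a hypothetical placeholder for whatever OIDC library actually verifies the token:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class BearerAuthBackend {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/api/messages", exchange -> {
            String auth = exchange.getRequestHeaders().getFirst("Authorization");
            if (auth == null || !auth.startsWith("Bearer ")
                    || !validateWithIdp(auth.substring("Bearer ".length()))) {
                respond(exchange, 401, "unauthorized"); // reject before any business logic runs
                return;
            }
            respond(exchange, 200, "accepted"); // token OK: process payload, save to DB, ...
        });
        server.start();
    }

    // Hypothetical placeholder: a real backend verifies the JWT signature
    // against the IdP's JWKS and checks issuer/audience/expiry with an
    // OIDC library (e.g. Keycloak's adapter or Nimbus JOSE+JWT).
    private static boolean validateWithIdp(String token) {
        return false; // never accept a token without real validation
    }

    private static void respond(HttpExchange exchange, int status, String body) throws IOException {
        byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
        exchange.sendResponseHeaders(status, bytes.length);
        try (OutputStream os = exchange.getResponseBody()) {
            os.write(bytes);
        }
    }
}
```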
Where do OIDC token validation and request filtering fit into the Kafka/Confluent ecosystem?
Assume we have a SPA that talks to the Confluent REST Proxy. A logged-in user should be able to post messages, and a non-logged-in user should not.
How do Kafka and/or its tools deal with that scenario?
Kafka itself commonly uses SASL for authentication, plus pluggable authorizers (ACLs), to control access.
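For example, a Java client (and the REST Proxy is itself such a client internally) would authenticate to a SASL-enabled cluster with settings along these lines; SASL/PLAIN is shown for brevity, and the broker address and credentials are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SaslProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093"); // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Authenticate over TLS with SASL/PLAIN; brokers can then apply
        // ACLs (authorization) to this authenticated principal.
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"rest-proxy\" password=\"change-me\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
        }
    }
}
```

Note that this authenticates the REST Proxy to the brokers, not your end users to the REST Proxy.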
Certificates would be distributed among the clients (here, that means the REST Proxy). You would need other proxies or plugins in front of it to restrict access further or audit the requests, just as with any other web server.
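As a sketch of what such a gatekeeping proxy could look like, assuming the REST Proxy listens on localhost:8082 and with `tokenIsValid` again a hypothetical stand-in for real OIDC validation:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProxyGatekeeper {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();
    private static final String REST_PROXY = "http://localhost:8082"; // assumed upstream

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            String auth = exchange.getRequestHeaders().getFirst("Authorization");
            if (auth == null || !auth.startsWith("Bearer ")
                    || !tokenIsValid(auth.substring("Bearer ".length()))) {
                System.out.println("audit: rejected " + exchange.getRequestMethod()
                        + " " + exchange.getRequestURI()); // simple audit trail
                reply(exchange, 401, new byte[0]);
                return;
            }
            try {
                // Forward the authenticated request as-is to the REST Proxy.
                HttpRequest.Builder upstream = HttpRequest.newBuilder()
                        .uri(URI.create(REST_PROXY + exchange.getRequestURI()))
                        .method(exchange.getRequestMethod(),
                                HttpRequest.BodyPublishers.ofByteArray(
                                        exchange.getRequestBody().readAllBytes()));
                String contentType = exchange.getRequestHeaders().getFirst("Content-Type");
                if (contentType != null) {
                    upstream.header("Content-Type", contentType);
                }
                HttpResponse<byte[]> response =
                        CLIENT.send(upstream.build(), HttpResponse.BodyHandlers.ofByteArray());
                reply(exchange, response.statusCode(), response.body());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                reply(exchange, 502, new byte[0]);
            }
        });
        server.start();
    }

    // Hypothetical placeholder for real OIDC validation (signature, issuer,
    // audience, expiry) against the IdP, e.g. Keycloak.
    private static boolean tokenIsValid(String token) {
        return false;
    }

    private static void reply(HttpExchange exchange, int status, byte[] body) throws IOException {
        // a length of -1 tells HttpServer there is no response body
        exchange.sendResponseHeaders(status, body.length == 0 ? -1 : body.length);
        try (OutputStream os = exchange.getResponseBody()) {
            os.write(body);
        }
    }
}
```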
HTTPS certificates would be used to secure traffic to the REST Proxy itself, but it seems you're asking about something more specific.
There is no reference to OpenID in the documentation; only LDAP RBAC, as a commercial offering.