azure-keyvault

Azure KeyVault Certificate with non-exportable key can still export the key through KeyVault Secret


For digital signature of files, I have a PKCS12 in hand with all the necessary key material (i.e. private key + signature certificate + certificate chain).

The PKCS12 is built with something that looks like:

openssl pkcs12 -in cert_chain.pem -inkey private.key -passin "pass:xx" -export -passout "pass:yy" -out signature.pfx

My intent is to import this material into Azure KeyVault and have it perform the actual signature computation, without my own software ever having access to the private key.

(This is not, of course, a real HSM signature, but the intent is more or less the same: my software would only have access to Azure KeyVault's signing API.)
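
To make the intent concrete, the signing side of my software would look more or less like this, going through the Key Vault cryptography client of the Java SDK (a minimal sketch; vault URL, key name and algorithm are placeholders for my actual setup):

import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.keys.cryptography.CryptographyClient;
import com.azure.security.keyvault.keys.cryptography.CryptographyClientBuilder;
import com.azure.security.keyvault.keys.cryptography.models.SignResult;
import com.azure.security.keyvault.keys.cryptography.models.SignatureAlgorithm;

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class RemoteSign {
    public static void main(String[] args) throws Exception {
        // Client bound to the key backing the imported certificate (placeholder names).
        CryptographyClient crypto = new CryptographyClientBuilder()
            .keyIdentifier("https://someKv.vault.azure.net/keys/someName")
            .credential(new DefaultAzureCredentialBuilder().build())
            .buildClient();

        // Only the digest leaves my application; the private key never should.
        byte[] digest = MessageDigest.getInstance("SHA-256")
            .digest("file content".getBytes(StandardCharsets.UTF_8));
        SignResult result = crypto.sign(SignatureAlgorithm.RS256, digest);
        System.out.println("signature length: " + result.getSignature().length);
    }
}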

But I can't seem to achieve this.

Even when marking the key as non-exportable, I always find a way to access the private key, one way or another, through the Azure APIs.

Long story short, the question: how do I isolate the private key, so that an application with access to the Key Vault (policy permissions: Certificate Get, Sign, Secret Get) cannot access it?

Let's see what I have so far:

As far as I understand, this should be possible with KeyVault's certificate policy, by marking the key as exportable: false.

I use azure-cli for the import operation:

az keyvault certificate import \
  --vault-name "someKv" \
  --name "someName" \
  -f signature.pfx \
  --password yy \
  --policy "@cert_policy.json"

With the policy being:

{
  "keyProperties": {"exportable": false},
  "secretProperties": {"contentType": "application/x-pkcs12"}
}

At this point something strange happens: my policy is not actually taken into account (in the output I still see the key marked as exportable: true).

I know the CLI does read my JSON file, because if I break its syntax I get JSON parsing errors from the CLI.

This may be the core of my issue... but I'm stubborn, so I use a raw REST API call to force the setting:

curl -XPATCH \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer someToken" \
  "https://someKv.vault.azure.net/certificates/myCertName/policy?api-version=7.4" \
  -d '{"key_props": {"exportable": false}}'

This succeeds (HTTP 200 OK), and following this call I can see:

> az keyvault certificate show --vault-name "someKv" --name someName
...
  "policy": {
    "attributes": { ... },
    ...
    "keyProperties": {
      ...
      "exportable": false,
      ...
    },

So I feel OK.

For example, I use the Java SDK in my app and, through the azure-security-keyvault-jca facilities, when I try accessing the key I get not an RSAPrivateKey but a KeyVaultPrivateKey carrying its KeyVault access attributes, which (if you want to follow the source) is clearly what is expected for non-exportable keys.
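
For reference, that access goes through the JCA provider along these lines (a rough sketch of my setup; the KeyVaultLoadStoreParameter values are placeholders and the exact credential plumbing may differ):

import com.azure.security.keyvault.jca.KeyVaultJcaProvider;
import com.azure.security.keyvault.jca.KeyVaultLoadStoreParameter;

import java.security.Key;
import java.security.KeyStore;
import java.security.Security;

public class JcaKeyAccess {
    public static void main(String[] args) throws Exception {
        // Register the Key Vault JCA provider and load the "AzureKeyVault" keystore.
        Security.addProvider(new KeyVaultJcaProvider());
        KeyStore ks = KeyStore.getInstance("AzureKeyVault");
        ks.load(new KeyVaultLoadStoreParameter(
            "https://someKv.vault.azure.net/",              // vault URI (placeholder)
            "someTenantId", "someClientId", "someSecret")); // credentials (placeholders)

        // For a non-exportable key this returns a KeyVaultPrivateKey (a remote handle),
        // not an RSAPrivateKey with usable key material.
        Key key = ks.getKey("someName", null);
        System.out.println(key.getClass().getName());
    }
}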

I still feel OK.

Then, I implement my signature, and I need access to the certificate chain (indeed, my actual CA uses an intermediate certificate between my own cert and its root, so I have to embed it in the signature).

Looking around, I find no other way to access it in my setup than to use Azure's KeyVault Secret API (when importing a KeyVault Certificate, Azure creates a KeyVault Secret alongside it). The secret's content is a base64-encoded PKCS12.
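
Concretely, fetching that secret and opening the PKCS12 looks roughly like this (a sketch with placeholder names; the backing secret shares the certificate's name, and the PKCS12 loads with an empty password). The ks and aliases used just below come from here:

import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;

import java.io.ByteArrayInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.util.Base64;
import java.util.Collections;
import java.util.List;

public class ChainFromSecret {
    public static void main(String[] args) throws Exception {
        // The secret backing the certificate has the same name as the certificate;
        // its value is the base64-encoded PKCS12.
        SecretClient secrets = new SecretClientBuilder()
            .vaultUrl("https://someKv.vault.azure.net/")             // placeholder
            .credential(new DefaultAzureCredentialBuilder().build())
            .buildClient();
        byte[] pfx = Base64.getDecoder().decode(secrets.getSecret("someName").getValue());

        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(new ByteArrayInputStream(pfx), "".toCharArray());    // empty password
        List<String> aliases = Collections.list(ks.aliases());

        // The certificate chain I actually need:
        Certificate[] chain = ks.getCertificateChain(aliases.get(0));
        System.out.println("chain length: " + chain.length);
    }
}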

I open this PKCS12; it has a single entry (just like my original, imported one) and, associated with this alias, the certificate and its chain... so what about the private key:

System.out.println(
  ks.getKey(aliases.get(0), "".toCharArray())
);
> SunRsaSign RSA private CRT key, 2048 bits
  params: null
  modulus: (a real value)
  private exponent: (a real value)

Oops... my software has access to the private key (incidentally, any Azure user with sufficient KeyVault privileges could also access it, which is not absurd, but not exactly what you expect from exportable: false). This is not what I intended.

Now, I could remove the Secret Get permission from my app in the KeyVault's access policy (well, in fact I couldn't, because other app features need it, but let's pretend), but then I could not access the secret at all, which prevents me from accessing the certificate chain (which I admittedly could ship with my app, but that duplicates information and makes rotation and maintenance more complex).


Solution

  • Answering my own question.

    It seems that this observation:

    At this point something strange happens: my policy is not actually taken into account (in the output I still see the key marked as exportable: true).

    is the key matter here. It looks as if changing the exportable policy after the initial upload is not supported.

    As a "proof", I switched the certificate import method from az cli to azure REST api :

    curl -XPOST \
      -H 'Content-Type: application/json' \
      -H "Authorization: Bearer ..." \
      "https://someKv.vault.azure.net/certificates/someName/import?api-version=7.4" \
      --data-raw "{\"value\": \"$(cat signature.pfx | openssl base64 -A)\", \"policy\": {\"key_props\": {\"exportable\": false}, \"secret_props\": {\"contentType\": \"application/x-pkcs12\"}}, \"pwd\": \"yy\"}"
    

    Using this upload, the imported object is immediately marked as exportable: false, and later downloading the associated secret outputs a PKCS12 with no private key (which, by the way, is a pretty strange beast to work with, at least in Java).
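
    To illustrate the strange beast part, here is roughly how I inspect that secret's PKCS12 from Java (a sketch; "secret.pfx" is a hypothetical local copy of the base64-decoded secret value):

    import java.io.ByteArrayInputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.KeyStore;
    import java.util.Enumeration;

    public class InspectSecretPfx {
        public static void main(String[] args) throws Exception {
            // Hypothetical local copy of the (base64-decoded) secret value.
            byte[] pfx = Files.readAllBytes(Path.of("secret.pfx"));

            KeyStore ks = KeyStore.getInstance("PKCS12");
            ks.load(new ByteArrayInputStream(pfx), "".toCharArray());

            // With the REST-based import there is no private key entry anymore;
            // listing the aliases shows what actually remains.
            Enumeration<String> aliases = ks.aliases();
            while (aliases.hasMoreElements()) {
                String alias = aliases.nextElement();
                System.out.println(alias
                    + " keyEntry=" + ks.isKeyEntry(alias)
                    + " certEntry=" + ks.isCertificateEntry(alias));
            }
        }
    }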

    So, in a nutshell: a possible misuse of (or bug in) the Azure CLI led me to import a certificate that was originally exportable. Changing the exportable flag after the import operation looks like it works, but does not. Using the REST API works around the issue.
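
    For completeness, the same import-with-policy can presumably also be expressed with the Java certificates SDK (azure-security-keyvault-certificates) instead of the raw REST call; an untested sketch on my side, with placeholder values:

    import com.azure.identity.DefaultAzureCredentialBuilder;
    import com.azure.security.keyvault.certificates.CertificateClient;
    import com.azure.security.keyvault.certificates.CertificateClientBuilder;
    import com.azure.security.keyvault.certificates.models.CertificateContentType;
    import com.azure.security.keyvault.certificates.models.CertificatePolicy;
    import com.azure.security.keyvault.certificates.models.ImportCertificateOptions;

    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ImportNonExportable {
        public static void main(String[] args) throws Exception {
            CertificateClient certs = new CertificateClientBuilder()
                .vaultUrl("https://someKv.vault.azure.net/")             // placeholder
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

            // Policy set at import time: exportable=false, PKCS12 content type.
            CertificatePolicy policy = CertificatePolicy.getDefault()
                .setExportable(false)
                .setContentType(CertificateContentType.PKCS12);

            certs.importCertificate(
                new ImportCertificateOptions("someName",
                        Files.readAllBytes(Path.of("signature.pfx")))
                    .setPassword("yy")
                    .setPolicy(policy));
        }
    }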