I'm working on interop between DirectX 11 and DirectX 9: I first create a D3D11 texture, get its shared DXGI handle, and then create a D3D9 texture from that handle. For device creation I use the default adapter on both the DX11 and the DX9 side.
As long as the preferred graphics processor in the NVIDIA Control Panel is set to "Auto-Select" or "Integrated graphics" everything works fine, but it fails when I choose the high-performance option. Device creation still works in that case, and my dedicated GPU is selected as expected.
The error I get is "Value does not fall within the expected range" when calling CreateTexture on my IDirect3DDevice9Ex. Does texture sharing like this work on a dedicated GPU, or am I missing something in how I configure the resources?
I'm using the Silk.NET wrapper for DirectX. Here are my snippets for the texture creation.
First, the D3D11 texture:
var texture2DDesc = new Texture2DDesc()
{
BindFlags = (uint)(BindFlag.RenderTarget),
Format = Silk.NET.DXGI.Format.FormatB8G8R8A8Unorm,
Width = width,
Height = height,
MipLevels = 1,
SampleDesc = new SampleDesc(1, 0),
Usage = Usage.Default,
MiscFlags = (uint)ResourceMiscFlag.None,
CPUAccessFlags = 0,
ArraySize = 1
};
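The texture itself is then created from this description, roughly like this (a sketch: device stands for my ID3D11Device pointer, which is not shown above, and SilkMarshal from Silk.NET.Core.Native is used to throw on failed HRESULTs):

// Create the shared texture on the D3D11 side
ID3D11Texture2D* d3d11Texture = null;
SilkMarshal.ThrowHResult(device->CreateTexture2D(&texture2DDesc, null, &d3d11Texture));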
Then I query the texture for its IDXGIResource interface to get the shared handle via GetSharedHandle.
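That part looks roughly like this (a sketch using Silk.NET's raw COM pointers; the exact QueryInterface and GetSharedHandle overloads may differ between Silk.NET versions):

IDXGIResource* dxgiResource = null;
var iid = IDXGIResource.Guid;
SilkMarshal.ThrowHResult(d3d11Texture->QueryInterface(&iid, (void**)&dxgiResource));

// The shared handle comes back as a raw HANDLE (void*)
void* sharedHandle = null;
SilkMarshal.ThrowHResult(dxgiResource->GetSharedHandle(&sharedHandle));
dxgiResource->Release();

Using that handle, I create the D3D9 texture: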
d3D9ExDevice.CreateTexture(
Width: width,
Height: height,
Levels: 1,
Usage: D3D9.UsageRendertarget,
Format: Silk.NET.Direct3D9.Format.A8R8G8B8,
Pool: Pool.Default,
ppTexture: ref d3D9Texture,
pSharedHandle: &sharedHandle
);
This is the call that fails with the error above.
Found the problem:
A flag was missing from the Texture2DDesc; with the following BindFlags it works:
BindFlags = (uint)(BindFlag.RenderTarget | BindFlag.ShaderResource)
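As far as I can tell this matches the documented restrictions for sharing surfaces with D3D9: the D3D11 texture must be a single-level 2D render target that is also bindable as a shader resource. For completeness, the working description (identical to the one in the question except for BindFlags):

var texture2DDesc = new Texture2DDesc()
{
    // Both bind flags are needed when the texture is shared with D3D9
    BindFlags = (uint)(BindFlag.RenderTarget | BindFlag.ShaderResource),
    Format = Silk.NET.DXGI.Format.FormatB8G8R8A8Unorm,
    Width = width,
    Height = height,
    MipLevels = 1,
    SampleDesc = new SampleDesc(1, 0),
    Usage = Usage.Default,
    MiscFlags = (uint)ResourceMiscFlag.None,
    CPUAccessFlags = 0,
    ArraySize = 1
};

With that change, the D3D9 CreateTexture call succeeds on the dedicated GPU as well.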