Direct3D 12 descriptor table + heap

The_GTA

#1

Posted 28 August 2015 - 09:26 PM

Hello developers,

 

I love Direct3D 12 and its low-level batched state management. The documentation is still pretty sparse, so I would love to exchange some problem solving with you guys if you have already put your hands on AM... eh, I mean Microsoft's API.

 

Do you guys know what a descriptor table is? I am trying to do a basic thing: use a descriptor table, i.e. a range of descriptor declarations that maps shader registers into a descriptor heap. But it fails! I thought Direct3D 12 is supposed to be natural, so come on!

 

Anyway, here is the code that constructs the root signature. It is supposed to initialize a single descriptor table with two ranges, each holding one descriptor.

    // Create the root signature.
    {
        CD3DX12_DESCRIPTOR_RANGE ranges[2];
        CD3DX12_ROOT_PARAMETER rootParameters[1];

        // Init(rangeType, numDescriptors, baseShaderRegister, registerSpace, offsetInDescriptorsFromTableStart)
        ranges[1].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0, 0, 0);    // CBV at b0, heap slot 0 of the table
        ranges[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_UAV, 1, 1, 0, 1);    // UAV at u1, heap slot 1 of the table
        rootParameters[0].InitAsDescriptorTable(_countof(ranges), ranges, D3D12_SHADER_VISIBILITY_PIXEL);

        // Allow input layout and deny unnecessary access to certain pipeline stages.
        D3D12_ROOT_SIGNATURE_FLAGS rootSignatureFlags =
            D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT |
            D3D12_ROOT_SIGNATURE_FLAG_DENY_HULL_SHADER_ROOT_ACCESS |
            D3D12_ROOT_SIGNATURE_FLAG_DENY_DOMAIN_SHADER_ROOT_ACCESS |
            D3D12_ROOT_SIGNATURE_FLAG_DENY_GEOMETRY_SHADER_ROOT_ACCESS;// |
            //D3D12_ROOT_SIGNATURE_FLAG_DENY_PIXEL_SHADER_ROOT_ACCESS;

        CD3DX12_ROOT_SIGNATURE_DESC rootSignatureDesc;
        rootSignatureDesc.Init(_countof(rootParameters), rootParameters, 0, nullptr, rootSignatureFlags);

        ComPtr<ID3DBlob> signature;
        ComPtr<ID3DBlob> error;
        ThrowIfFailed(D3D12SerializeRootSignature(&rootSignatureDesc, D3D_ROOT_SIGNATURE_VERSION_1, &signature, &error));
        ThrowIfFailed(m_device->CreateRootSignature(0, signature->GetBufferPointer(), signature->GetBufferSize(), IID_PPV_ARGS(&m_rootSignature)));
    }
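
For reference, this is what the shader side looks like (a paraphrase of my actual HLSL; the struct members and cbuffer contents are placeholders, only the register assignments matter here):

    // One CBV at b0 and one UAV at u1, matching the two ranges above.
    struct DepthColorSortSample
    {
        float  depth;       // placeholder member
        float4 color;       // placeholder member
    };

    cbuffer MyConstants : register(b0)  // placeholder name
    {
        float4 someConstant;            // placeholder member
    };

    // The structured buffer that my pixel shader cannot reach.
    RWStructuredBuffer<DepthColorSortSample> mybuf : register(u1);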

So alright. What I mean is: I am creating a layout for a descriptor heap, something like this... right?

[Image: descriptor_table_mapping_problem.png (diagram of how the descriptor table is supposed to map onto the descriptor heap)]
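
In plain text, the mapping I expect is this (my own sketch of the intended layout):

    m_cbvHeap slot 0  ->  CBV, shader register b0 (range offset 0)
    m_cbvHeap slot 1  ->  UAV, shader register u1 (range offset 1)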

But for some reason access to mybuf is mapped wrong! My test HLSL code cannot access the structured buffer at all. When I debug it, every write access somehow gets redirected into the constant buffer view!
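
In case it matters: I test with the debug layer enabled before device creation, the usual snippet (nothing specific to my setup):

    // Enable the D3D12 debug layer (debug builds only); it must be turned on
    // before the device is created or it has no effect.
    ComPtr<ID3D12Debug> debugController;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugController))))
    {
        debugController->EnableDebugLayer();
    }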

 

Here is the code where I put the pointers, er, descriptors into the descriptor heap... and then set it all active in a command bundle!

    // Create the constant buffer.
    {
        ThrowIfFailed(m_device->CreateCommittedResource(
            &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_UPLOAD),
            D3D12_HEAP_FLAG_NONE,
            &CD3DX12_RESOURCE_DESC::Buffer(1024 * 64),
            D3D12_RESOURCE_STATE_GENERIC_READ,
            nullptr,
            IID_PPV_ARGS(&m_constantBuffer)));

        // Describe and create a constant buffer view.
        D3D12_CONSTANT_BUFFER_VIEW_DESC cbvDesc = {};
        cbvDesc.BufferLocation = m_constantBuffer->GetGPUVirtualAddress();
        cbvDesc.SizeInBytes = (sizeof(ConstantBuffer) + 255) & ~255;	// CB size is required to be 256-byte aligned.

        CD3DX12_CPU_DESCRIPTOR_HANDLE handle01( m_cbvHeap->GetCPUDescriptorHandleForHeapStart(), 0, this->m_cbvUavDescriptorSize ); // CPU handle of heap slot 0

        m_device->CreateConstantBufferView(&cbvDesc, handle01);

        // Initialize and map the constant buffers. We don't unmap this until the
        // app closes. Keeping things mapped for the lifetime of the resource is okay.
        ZeroMemory(&m_constantBufferData, sizeof(m_constantBufferData));
        ThrowIfFailed(m_constantBuffer->Map(0, nullptr, reinterpret_cast<void**>(&m_pCbvDataBegin)));
        memcpy(m_pCbvDataBegin, &m_constantBufferData, sizeof(m_constantBufferData));

        size_t texBufSize = ( sizeof(DepthColorSortSample) * this->m_width * this->m_height + 255 ) & ~255;

        ThrowIfFailed(m_device->CreateCommittedResource(
            &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_UPLOAD),
            D3D12_HEAP_FLAG_NONE,
            &CD3DX12_RESOURCE_DESC::Buffer(texBufSize),
            D3D12_RESOURCE_STATE_GENERIC_READ,
            nullptr,
            IID_PPV_ARGS(&m_uavEndBuffer)));

        D3D12_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
        uavDesc.ViewDimension = D3D12_UAV_DIMENSION_BUFFER;
        uavDesc.Format = DXGI_FORMAT_UNKNOWN;
        uavDesc.Buffer.FirstElement = 0;
        uavDesc.Buffer.CounterOffsetInBytes = 0;
        uavDesc.Buffer.Flags = D3D12_BUFFER_UAV_FLAG_NONE;
        uavDesc.Buffer.NumElements = ( m_width * m_height );
        uavDesc.Buffer.StructureByteStride = sizeof(DepthColorSortSample);

        CD3DX12_CPU_DESCRIPTOR_HANDLE handle02( m_cbvHeap->GetCPUDescriptorHandleForHeapStart(), 1, this->m_cbvUavDescriptorSize ); // CPU handle of heap slot 1

        m_device->CreateUnorderedAccessView( m_uavEndBuffer.Get(), NULL, &uavDesc, handle02 );

        // Map for eternity.
        ThrowIfFailed(m_uavEndBuffer->Map(0, nullptr, (void**)&m_uavEndDataBegin));

        // Zero out.
        memset( m_uavEndDataBegin, 0, texBufSize );
    }
(...)
    // Create and record the bundle.
    {
        ThrowIfFailed(m_device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_BUNDLE, m_bundleAllocator.Get(), m_pipelineState.Get(), IID_PPV_ARGS(&m_bundle)));
        m_bundle->SetDescriptorHeaps(1, m_cbvHeap.GetAddressOf());
        m_bundle->SetGraphicsRootSignature(m_rootSignature.Get());
        m_bundle->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        m_bundle->IASetVertexBuffers(0, 1, &m_vertexBufferView);
        m_bundle->SetGraphicsRootDescriptorTable(0, m_cbvHeap->GetGPUDescriptorHandleForHeapStart()); // LOOK HERE.
        m_bundle->DrawInstanced(3, 1, 0, 0);
        ThrowIfFailed(m_bundle->Close());
    }
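
For completeness, m_cbvHeap itself is a shader-visible CBV/SRV/UAV heap with two slots. I create it roughly like this (reconstructed from memory since I did not paste that part, so treat it as a sketch):

    {
        // Shader-visible heap holding both descriptors of the table.
        D3D12_DESCRIPTOR_HEAP_DESC cbvHeapDesc = {};
        cbvHeapDesc.NumDescriptors = 2;
        cbvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
        cbvHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
        ThrowIfFailed(m_device->CreateDescriptorHeap(&cbvHeapDesc, IID_PPV_ARGS(&m_cbvHeap)));

        // The stride used by handle01/handle02 above.
        m_cbvUavDescriptorSize = m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
    }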

What is going wrong? :(

 

* Does a descriptor table not support two ranges of different types? (CBV and UAV; the docs say you can mix them.)

* Have I made a mistake in the HLSL syntax? (I am using register u1, exactly as specified by base register 1 inside the descriptor table, so again, what gives?)

* Is it a bug in D3D12 itself? (Like, seriously, the docs are supposed to be final.)

* Is my conception of the descriptor table wrong? Isn't it a layout describing how the GPU should access the descriptor heap? Doesn't it offset into the heap based on the range offsets you give in each range declaration of the descriptor table? (See the sketch after this list.)
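
To make that last point concrete, this is my mental model of how a table entry is resolved, written out with the d3dx12.h handle helpers (my own illustration, not actual driver code):

    // My understanding: the root parameter's base handle plus each range's
    // offsetInDescriptorsFromTableStart selects a heap slot.
    CD3DX12_GPU_DESCRIPTOR_HANDLE tableStart( m_cbvHeap->GetGPUDescriptorHandleForHeapStart() );
    CD3DX12_GPU_DESCRIPTOR_HANDLE cbvSlot( tableStart, 0, m_cbvUavDescriptorSize ); // range offset 0 -> b0
    CD3DX12_GPU_DESCRIPTOR_HANDLE uavSlot( tableStart, 1, m_cbvUavDescriptorSize ); // range offset 1 -> u1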

 

Maybe we can learn something from this.

If you need more information, please tell me. I will deliver it in a good format.




