04 Resources and API Objects
This section briefly explains and shows examples of creating swapchains and a variety of other resources using LinaGX.

Throughout the LinaGX API, whenever you come across an m_lgx->CreateXXX function, there is very likely an equivalent m_lgx->DestroyXXX function. Please be advised that you are responsible for cleaning up your resources.
```cpp
m_swapchain = m_lgx->CreateSwapchain({
    .format       = Format::B8G8R8A8_SRGB,
    .x            = 0,
    .y            = 0,
    .width        = m_windowX,
    .height       = m_windowY,
    .window       = m_window->GetWindowHandle(),
    .osHandle     = m_window->GetOSHandle(),
    .isFullscreen = false,
    .vsyncStyle   = {VKVsync::None, DXVsync::None},
});
```
You should create a swapchain per application window. If you need to recreate a swapchain for a window resize, do not destroy it; use the RecreateSwapchain method instead:
```cpp
SwapchainRecreateDesc resizeDesc = {
    .swapchain    = m_swapchain,
    .width        = w,
    .height       = h,
    .isFullscreen = w == monitor.x && h == monitor.y,
};
m_lgx->RecreateSwapchain(resizeDesc);
```
In cases where you don't want to destroy a swapchain but simply deactivate it, for example when one of the application windows is hidden, you can use:

```cpp
m_lgx->SetSwapchainActive(mySwapchain, false);
```

This effectively removes the swapchain from submission work, making sure we do not unnecessarily try to acquire an image for it.
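When a window is closed for good, destroy its swapchain following the CreateXXX/DestroyXXX convention mentioned earlier. A minimal cleanup sketch; DestroySwapchain is the counterpart implied by that convention, and waiting for the GPU to go idle beforehand is assumed to be the caller's responsibility:

```cpp
// Make sure no in-flight work still references the swapchain before
// destroying it. A Join()-style "wait for all GPU work" call is assumed
// here; use whatever idle-wait your setup provides.
m_lgx->Join();

// Counterpart of CreateSwapchain, per the CreateXXX/DestroyXXX pattern.
m_lgx->DestroySwapchain(m_swapchain);
```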
```cpp
LinaGX::ResourceDesc matDataResource = {
    .size          = sizeof(GPUMaterialData),
    .typeHintFlags = LinaGX::TH_ConstantBuffer,
    .heapType      = LinaGX::ResourceHeap::StagingHeap,
    .debugName     = "Material Data Staging",
};
uint32 myResource = m_lgx->CreateResource(matDataResource);
```
Creating resources is straightforward; you can create them in CPU, GPU, or CPU-visible GPU memory. Use type hint flags to hint to LinaGX about the intended usage of the resource.
```cpp
uint8* mapping = nullptr;
m_lgx->MapResource(myResource, mapping);
std::memcpy(mapping, &mat.gpuMat, sizeof(GPUMaterialData));
```
After creating a resource, you can easily map it to a CPU pointer. You don't have to call Unmap(); when resources are destroyed, they are effectively unmapped.
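In contrast to the staging resource above, a device-local resource lives in GPU-only memory and cannot be mapped; it is typically the copy destination for a staging buffer. A sketch, assuming a GPUOnly heap type (the exact enumerator name is an assumption; check the ResourceHeap enum):

```cpp
// Device-local buffer intended as the final home of the material data.
// LinaGX::ResourceHeap::GPUOnly is an assumed enumerator name here.
LinaGX::ResourceDesc matDataGPU = {
    .size          = sizeof(GPUMaterialData),
    .typeHintFlags = LinaGX::TH_ConstantBuffer,
    .heapType      = LinaGX::ResourceHeap::GPUOnly,
    .debugName     = "Material Data GPU",
};
uint32 myGPUResource = m_lgx->CreateResource(matDataGPU);
// Unlike the staging resource, this one cannot be mapped; fill it by
// recording a copy from the staging buffer in a command stream.
```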
Texture creation works much the same way as resource creation:
```cpp
std::vector<LinaGX::ViewDesc> views;
views.push_back({0, 0, 0, 0, true});

LinaGX::TextureDesc desc{
    .type        = LinaGX::TextureType::Texture2D,
    .format      = LinaGX::Format::R16G16B16A16_SFLOAT,
    .views       = views,
    .flags       = TextureFlags::TF_ColorAttachment | TextureFlags::TF_Sampled | TextureFlags::TF_CopyDest | TextureFlags::TF_Cubemap,
    .width       = width,
    .height      = height,
    .arrayLength = 6,
    .debugName   = debugName,
};
txt.gpuHandle = m_lgx->CreateTexture(desc);
```
You can define the texture format, how many views to create for the texture (which can be indexed later in texture operations), usage flags, dimensions, mip and array levels, and so on.
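The single view above covers the whole texture. For a cubemap you may also want one view per face, e.g. to render into each face individually. A sketch, assuming ViewDesc's positional fields are base array level, level count, base mip level, mip count, and a cubemap flag (this field order is an assumption inferred from the example above):

```cpp
// One view spanning the whole cubemap (as in the example above), plus
// six per-face views. ViewDesc field order is assumed: base array
// level, level count, base mip level, mip count, isCubemap.
std::vector<LinaGX::ViewDesc> views;
views.push_back({0, 0, 0, 0, true}); // whole-cubemap view

for (uint32 face = 0; face < 6; face++)
    views.push_back({face, 1, 0, 0, false}); // single-face 2D view
```

Views created this way can then be indexed later in texture operations, as noted above.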
```cpp
LinaGX::SamplerDesc clampSampler = {
    .minFilter  = Filter::Linear,
    .magFilter  = Filter::Linear,
    .mode       = SamplerAddressMode::ClampToEdge,
    .mipmapMode = MipmapMode::Linear,
    .anisotropy = 0,
    .minLod     = 0.0f,
    .maxLod     = 1.0f,
    .mipLodBias = 6.0f,
};
m_samplers.push_back(m_lgx->CreateSampler(clampSampler));
```
Queue creation in LinaGX works similarly to DirectX12. Unlike Vulkan, you can create as many queues as you want; LinaGX internally maps those virtual queues to physical queues in the background.
```cpp
LinaGX::QueueDesc desc = {
    .type      = LinaGX::CommandType::Compute,
    .debugName = "Compute Queue",
};
m_lgx->CreateQueue(desc);
```
LinaGX internally creates 3 queues for you: Graphics, Transfer, and Compute. They can be accessed via:

```cpp
uint8 primQueue = m_lgx->GetPrimaryQueue(LinaGX::CommandType::Compute);
```

These primary queues are created and deleted by LinaGX; you don't have to do anything regarding their lifetime.
Users can create custom semaphore objects, increment their values and signal them during queue submission, or wait on them. If you are familiar with DX12 fences or Vulkan timeline semaphores, LinaGX takes the exact same approach to queue synchronization with user semaphores: instead of binary semaphores, it uses a value-incrementation-based approach.
```cpp
pfd.transferSemaphore = m_lgx->CreateUserSemaphore();
```
If you have used a semaphore in a queue submission, you can also wait on it on the CPU:
```cpp
m_lgx->WaitForUserSemaphore(pfd.transferSemaphore, pfd.lastTransferValue);
```
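Putting it together, a submission that signals a user semaphore might look like the sketch below. The SubmitDesc field names are assumptions modeled on typical LinaGX usage, and pfd.transferStream stands in for a previously recorded command stream; verify both against the headers:

```cpp
// Bump the timeline value we expect the GPU to reach, then ask the
// submission to signal the semaphore with that value on completion.
// Field names (targetQueue, streams, useSignal, ...) are assumptions;
// check LinaGX::SubmitDesc for the exact structure.
pfd.lastTransferValue++;

LinaGX::SubmitDesc submit = {
    .targetQueue      = m_lgx->GetPrimaryQueue(LinaGX::CommandType::Transfer),
    .streams          = &pfd.transferStream, // hypothetical recorded command stream
    .streamCount      = 1,
    .useSignal        = true,
    .signalCount      = 1,
    .signalSemaphores = &pfd.transferSemaphore,
    .signalValues     = &pfd.lastTransferValue,
};
m_lgx->SubmitCommandStreams(submit);

// Later, block the CPU until the GPU has reached that value.
m_lgx->WaitForUserSemaphore(pfd.transferSemaphore, pfd.lastTransferValue);
```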
```cpp
LinaGX::DescriptorBinding binding0 = {
    .descriptorCount = 1,
    .type            = LinaGX::DescriptorType::UBO,
    .stages          = {LinaGX::ShaderStage::Vertex, LinaGX::ShaderStage::Fragment},
};

LinaGX::DescriptorBinding binding1 = {
    .descriptorCount = 4,
    .type            = LinaGX::DescriptorType::SeparateImage,
    .stages          = {LinaGX::ShaderStage::Fragment},
};

LinaGX::DescriptorSetDesc desc = {.bindings = {binding0, binding1}};
uint16 mySet = m_lgx->CreateDescriptorSet(desc);
```
LinaGX uses GLSL and takes a Vulkan-based approach to managing resource bindings. For each set you use in your shader, you have to bind a matching descriptor set, which you first need to create. When creating the set, you define each binding: its type, descriptor count, stages, and additional properties such as whether it is mutable or unbounded.
After you have your sets, you can update them with resources or images:
```cpp
LinaGX::DescriptorUpdateImageDesc imgUpdate = {
    .setHandle = mySet,
    .binding   = 1,
    .textures  = {m_textures[mat.gpuMat.baseColor].gpuHandle, m_textures[mat.gpuMat.normal].gpuHandle, m_textures[mat.gpuMat.metallicRoughness].gpuHandle, dummyTexture},
};
m_lgx->DescriptorUpdateImage(imgUpdate);
```
The above example updates the set's binding #1 with the given 4 textures. You could also plug in samplers to update instead; however, in this example the descriptor type is a separate image (texture2D) rather than a combined sampler2D.
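For a binding of type SeparateSampler, the update would carry samplers rather than textures. A sketch, assuming DescriptorUpdateImageDesc exposes a samplers field and that a binding #2 of that type exists on the set (both assumptions):

```cpp
// Update a hypothetical binding #2 (SeparateSampler) with a sampler
// created earlier. The .samplers field name is an assumption; check
// DescriptorUpdateImageDesc in the headers.
LinaGX::DescriptorUpdateImageDesc samplerUpdate = {
    .setHandle = mySet,
    .binding   = 2,
    .samplers  = {m_samplers[0]},
};
m_lgx->DescriptorUpdateImage(samplerUpdate);
```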
Below is another example, updating a descriptor set with a GPU resource:
```cpp
LinaGX::DescriptorUpdateBufferDesc update = {
    .setHandle = mySet,
    .binding   = 0,
    .buffers   = {mat.gpuResources[i]},
};
m_lgx->DescriptorUpdateBuffer(update);
```
Although you can automate pipeline layout creation by using reflection information from LinaGX, it is still possible to create custom pipeline layouts:
```cpp
LinaGX::DescriptorBinding binding0 = {
    .descriptorCount = 1,
    .type            = LinaGX::DescriptorType::UBO,
    .stages          = {LinaGX::ShaderStage::Vertex, LinaGX::ShaderStage::Fragment},
};

LinaGX::DescriptorBinding binding1 = {
    .descriptorCount = 1,
    .type            = LinaGX::DescriptorType::SSBO,
    .stages          = {LinaGX::ShaderStage::Vertex, LinaGX::ShaderStage::Fragment},
};

LinaGX::DescriptorBinding binding2 = {
    .descriptorCount = 3,
    .type            = LinaGX::DescriptorType::SeparateSampler,
    .stages          = {LinaGX::ShaderStage::Vertex, LinaGX::ShaderStage::Fragment},
};

LinaGX::DescriptorSetDesc desc = {
    .bindings = {binding0, binding1, binding2},
};

LinaGX::PipelineLayoutDesc pipelineLayoutSetGlobal = {
    .descriptorSetDescriptions = {desc},
    .constantRanges            = {{{LinaGX::ShaderStage::Fragment, LinaGX::ShaderStage::Vertex}, sizeof(GPUConstants)}},
};

m_pipelineLayouts[PipelineLayoutType::PL_GlobalSet] = m_lgx->CreatePipelineLayout(pipelineLayoutSetGlobal);
```
The above example first creates a descriptor set description, then creates a pipeline layout that uses a single descriptor set based on that description.