Recommended usage patterns

Vulkan gives great flexibility in memory allocation.

This chapter shows the most common patterns.

See also slides from talk: Sawicki, Adam. Advanced Graphics Techniques Tutorial: Memory management in Vulkan and DX12. Game Developers Conference, 2018

GPU-only resource

When: Any resources that you frequently write and read on GPU, e.g. images used as color attachments (aka "render targets"), depth-stencil attachments, images/buffers used as storage image/buffer (aka "Unordered Access View (UAV)").

What to do: Let the library select the optimal memory type, which will likely have VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT.

VkImageCreateInfo imgCreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
imgCreateInfo.imageType = VK_IMAGE_TYPE_2D;
imgCreateInfo.extent.width = 3840;
imgCreateInfo.extent.height = 2160;
imgCreateInfo.extent.depth = 1;
imgCreateInfo.mipLevels = 1;
imgCreateInfo.arrayLayers = 1;
imgCreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
imgCreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
imgCreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
imgCreateInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
imgCreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.priority = 1.0f;
VkImage img;
VmaAllocation alloc;
vmaCreateImage(allocator, &imgCreateInfo, &allocCreateInfo, &img, &alloc, nullptr);

Also consider: Creating them as dedicated allocations using VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT, especially if they are large or if you plan to destroy and recreate them with different sizes, e.g. when the display resolution changes. Prefer to create such resources first and all other GPU resources (like textures and vertex buffers) later. When the VK_EXT_memory_priority extension is enabled, it is also worth setting a high priority on such allocations to decrease the chance that the operating system evicts them to system memory.
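
For illustration only, a minimal sketch of the allocation create info with these optional settings applied (the specific values are assumptions for a large render target, not requirements):

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
// Request a separate VkDeviceMemory block for this large attachment.
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
// Takes effect only when the VK_EXT_memory_priority extension is enabled.
allocCreateInfo.priority = 1.0f;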

Staging copy for upload

When: A "staging" buffer than you want to map and fill from CPU code, then use as a source od transfer to some GPU resource.

What to do: Use flag VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT. Let the library select the optimal memory type, which will always have VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT.

VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
    VMA_ALLOCATION_CREATE_MAPPED_BIT;
VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
...
memcpy(allocInfo.pMappedData, myData, myDataSize);

Also consider: You can map the allocation using vmaMapMemory() or you can create it as persistently mapped using VMA_ALLOCATION_CREATE_MAPPED_BIT, as in the example above.
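
If you prefer not to keep the staging buffer persistently mapped (in that case the VMA_ALLOCATION_CREATE_MAPPED_BIT flag can be omitted), a minimal sketch of the vmaMapMemory()/vmaUnmapMemory() alternative, reusing alloc, myData and myDataSize from the example above:

void* mappedData;
// Map only for the duration of the copy, then unmap.
vmaMapMemory(allocator, alloc, &mappedData);
memcpy(mappedData, myData, myDataSize);
vmaUnmapMemory(allocator, alloc);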

Readback

When: Buffers for data written by or transferred from the GPU that you want to read back on the CPU, e.g. results of some computations.

What to do: Use flag VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT. Let the library select the optimal memory type, which will always have VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT and VK_MEMORY_PROPERTY_HOST_CACHED_BIT.

VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT |
    VMA_ALLOCATION_CREATE_MAPPED_BIT;
VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
...
const float* downloadedData = (const float*)allocInfo.pMappedData;
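
If the selected memory type turns out not to be HOST_COHERENT, the mapped range should be invalidated on the host before reading; vmaInvalidateAllocation() does this and is a no-op for coherent memory. A minimal sketch, assuming the GPU writes to buf have already completed and been made visible to the host:

// Invalidate the whole allocation before reading the data back on the CPU.
vmaInvalidateAllocation(allocator, alloc, 0, VK_WHOLE_SIZE);
const float* downloadedData = (const float*)allocInfo.pMappedData;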

Advanced data uploading

For resources that you frequently write on the CPU via a mapped pointer and frequently read on the GPU, e.g. as a uniform buffer (also called "dynamic"), multiple options are possible:

  1. The easiest solution is to have one copy of the resource in HOST_VISIBLE memory, even if it means system RAM (not DEVICE_LOCAL) on systems with a discrete graphics card, and make the device reach out to that resource directly.
    • Reads performed by the device will then go through the PCI Express bus. The performance of this access may be limited, but it may be fine depending on the size of this resource (whether it is small enough to quickly end up in GPU cache) and the sparsity of access.
  2. On systems with unified memory (e.g. AMD APU or Intel integrated graphics, mobile chips), a memory type may be available that is both HOST_VISIBLE (available for mapping) and DEVICE_LOCAL (fast to access from the GPU). Then, it is likely the best choice for such type of resource.
  3. Systems with a discrete graphics card and separate video memory may or may not expose a memory type that is both HOST_VISIBLE and DEVICE_LOCAL, also known as Base Address Register (BAR). If they do, it represents a piece of VRAM (or entire VRAM, if ReBAR is enabled in the motherboard BIOS) that is available to CPU for mapping.
    • Writes performed by the host to that memory go through PCI Express bus. The performance of these writes may be limited, but it may be fine, especially on PCIe 4.0, as long as rules of using uncached and write-combined memory are followed - only sequential writes and no reads.
  4. Finally, you may need or prefer to create a separate copy of the resource in DEVICE_LOCAL memory, a separate "staging" copy in HOST_VISIBLE memory and perform an explicit transfer command between them.

Thankfully, VMA offers an aid to create and use such resources in the way optimal for the current Vulkan device. To help the library make the best choice, use flag VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT together with VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT. The library will then prefer a memory type that is both DEVICE_LOCAL and HOST_VISIBLE (integrated memory or BAR), but if no such memory type is available or allocation from it fails (PC graphics cards have only 256 MB of BAR by default, unless ReBAR is supported and enabled in the BIOS), it will fall back to DEVICE_LOCAL memory for fast GPU access. It is then up to you to detect that the allocation ended up in a memory type that is not HOST_VISIBLE and, in that case, create another "staging" allocation and perform explicit transfers.

VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
    VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT |
    VMA_ALLOCATION_CREATE_MAPPED_BIT;
VkBuffer buf;
VmaAllocation alloc;
VmaAllocationInfo allocInfo;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);

VkMemoryPropertyFlags memPropFlags;
vmaGetAllocationMemoryProperties(allocator, alloc, &memPropFlags);

if(memPropFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)
{
    // Allocation ended up in a mappable memory and is already mapped - write to it directly.

    // [Executed in runtime]:
    memcpy(allocInfo.pMappedData, myData, myDataSize);
}
else
{
    // Allocation ended up in a non-mappable memory - need to transfer.
    VkBufferCreateInfo stagingBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
    stagingBufCreateInfo.size = 65536;
    stagingBufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;

    VmaAllocationCreateInfo stagingAllocCreateInfo = {};
    stagingAllocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
    stagingAllocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
        VMA_ALLOCATION_CREATE_MAPPED_BIT;

    VkBuffer stagingBuf;
    VmaAllocation stagingAlloc;
    VmaAllocationInfo stagingAllocInfo;
    vmaCreateBuffer(allocator, &stagingBufCreateInfo, &stagingAllocCreateInfo,
        &stagingBuf, &stagingAlloc, &stagingAllocInfo);

    // [Executed in runtime]:
    memcpy(stagingAllocInfo.pMappedData, myData, myDataSize);
    //vkCmdPipelineBarrier: VK_ACCESS_HOST_WRITE_BIT --> VK_ACCESS_TRANSFER_READ_BIT
    VkBufferCopy bufCopy = {
        0, // srcOffset
        0, // dstOffset
        myDataSize }; // size
    vkCmdCopyBuffer(cmdBuf, stagingBuf, buf, 1, &bufCopy);
}

Other use cases

Here are some other, less obvious use cases and their recommended settings: