--- /dev/null
+Sparse resources tests
+
+Tests:
+
+dEQP-VK.sparse_resources.*
+
+Includes:
+
+1. Test fully resident buffer created with VK_BUFFER_CREATE_SPARSE_BINDING_BIT flag bit
+2. Test fully resident image created with VK_IMAGE_CREATE_SPARSE_BINDING_BIT flag bit
+3. Test partially resident buffer created with VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT flag bit
+4. Test partially resident image created with VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT flag bit
+5. Test partially resident image with mipmaps, with some mipmap levels placed in the mip tail region
+6. Test memory aliasing for fully resident buffer objects
+
+Description:
+
+1. Test fully resident buffer created with VK_BUFFER_CREATE_SPARSE_BINDING_BIT flag bit
+
+The test creates a buffer object with the VK_BUFFER_CREATE_SPARSE_BINDING_BIT flag bit. The size of the buffer is one
+of the test parameters. The memory requirements of the buffer are queried. Device memory is allocated
+in chunks equal to the alignment parameter of the buffer's memory requirements. The number of allocations is equal to
+bufferRequirements.size / bufferRequirements.alignment.
+
+The test creates two queues - one supporting sparse binding operations, the second one supporting compute and transfer operations.
+
+The first queue is used to bind device memory to the sparse buffer. The binding operation signals a semaphore
+used for synchronization.
+
+The second queue is used to perform transfer operations. The test creates two non-sparse buffer objects,
+one used as input and the other as output. The input buffer is used to transfer data to the sparse buffer. The data is then
+transferred further from the sparse buffer to the output buffer. The transfer queue waits on the semaphore before any transfer
+operations can be issued.
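+
+Below is a minimal sketch of this synchronization using plain Vulkan calls; the handle names (sparseBufferBindInfo,
+bindSemaphore, transferCmdBuffer, sparseQueue, transferQueue) are illustrative and do not refer to the actual
+implementation:
+
+	VkBindSparseInfo bindInfo = {};
+	bindInfo.sType                = VK_STRUCTURE_TYPE_BIND_SPARSE_INFO;
+	bindInfo.bufferBindCount      = 1;
+	bindInfo.pBufferBinds         = &sparseBufferBindInfo;	// binds device memory chunks to the sparse buffer
+	bindInfo.signalSemaphoreCount = 1;
+	bindInfo.pSignalSemaphores    = &bindSemaphore;		// signaled when the binding operation completes
+	vkQueueBindSparse(sparseQueue, 1, &bindInfo, VK_NULL_HANDLE);
+
+	const VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_TRANSFER_BIT;
+	VkSubmitInfo submitInfo = {};
+	submitInfo.sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO;
+	submitInfo.waitSemaphoreCount = 1;
+	submitInfo.pWaitSemaphores    = &bindSemaphore;		// transfer work waits for the binding to complete
+	submitInfo.pWaitDstStageMask  = &waitStage;
+	submitInfo.commandBufferCount = 1;
+	submitInfo.pCommandBuffers    = &transferCmdBuffer;
+	vkQueueSubmit(transferQueue, 1, &submitInfo, VK_NULL_HANDLE);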
+
+The validation part retrieves data back from the output buffer to host memory. The data is then compared with the reference data
+that was originally sent to the input buffer. If the two data sets match, the test passes.
+
+2. Test fully resident image created with VK_IMAGE_CREATE_SPARSE_BINDING_BIT flag bit
+
+The test checks all supported types of images. It creates an image with the VK_IMAGE_CREATE_SPARSE_BINDING_BIT flag bit.
+The memory requirements of the image are queried. Device memory is allocated in chunks equal to the alignment parameter
+of the image memory requirements. The number of allocations is equal to imageRequirements.size / imageRequirements.alignment.
+
+The test creates two queues - one supporting sparse binding operations, the second one supporting compute and transfer operations.
+
+The first queue is used to bind device memory to the sparse image. The binding operation signals a semaphore
+used for synchronization.
+
+The second queue is used to perform transfer operations. The test creates two non-sparse buffer objects,
+one used as input and the other as output. The input buffer is used to transfer data to the sparse image. The data is then
+transferred further from the sparse image to the output buffer. The transfer queue waits on the semaphore before transfer operations
+can be issued.
+
+The validation part retrieves data back from the output buffer to host memory. The data is then compared with the reference data
+that was originally sent to the input buffer. If the two data sets match, the test passes.
+
+3. Test partially resident buffer created with VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT flag bit
+
+The test creates a buffer object with the VK_BUFFER_CREATE_SPARSE_RESIDENCY_BIT flag bit. The size of the buffer is one
+of the test parameters. The sparse memory requirements of the buffer are queried. Device memory is allocated
+in chunks equal to the alignment parameter of the buffer's memory requirements. Memory is bound to the buffer object leaving gaps
+between bound blocks, each gap equal in size to the alignment.
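+
+As an illustration, the binding pattern can be expressed roughly as follows (memReqs holds the buffer memory
+requirements and allocateChunk() stands for a hypothetical helper that allocates one alignment-sized block of
+device memory):
+
+	std::vector<VkSparseMemoryBind> binds;
+	for (VkDeviceSize offset = 0; offset < memReqs.size; offset += 2 * memReqs.alignment)
+	{
+		VkSparseMemoryBind bind = {};
+		bind.resourceOffset = offset;			// bound block
+		bind.size           = memReqs.alignment;
+		bind.memory         = allocateChunk();	// one alignment-sized allocation per bound block
+		bind.memoryOffset   = 0;
+		binds.push_back(bind);
+		// the following alignment-sized block is intentionally left unbound (a gap)
+	}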
+
+The test creates two queues - one supporting sparse binding operations, the second one supporting compute and transfer operations.
+
+The first queue is used to bind device memory to the sparse buffer. The binding operation signals a semaphore
+used for synchronization.
+
+The second queue is used to perform compute and transfer operations. A compute shader is invoked to fill the whole buffer with data.
+Afterwards the data is transferred from the sparse buffer to a non-sparse output buffer.
+
+The validation part retrieves data back from the output buffer to host memory. For parts of the data that correspond to regions
+of the sparse buffer that have device memory bound, the comparison is done against the expected output of the compute shader.
+For parts that correspond to gaps, the data is undefined, unless the residencyNonResidentStrict device sparse property is TRUE,
+in which case reads from unbound regions must return zeros.
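+
+Whether unbound regions must read back as zeros can be determined from the device sparse properties, roughly as
+follows:
+
+	VkPhysicalDeviceProperties properties;
+	vkGetPhysicalDeviceProperties(physicalDevice, &properties);
+
+	if (properties.sparseProperties.residencyNonResidentStrict)
+	{
+		// reads from unbound regions are guaranteed to return zeros - validate gap contents against zero
+	}
+	else
+	{
+		// gap contents are undefined - skip validation for unbound regions
+	}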
+
+4. Test partially resident image created with VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT flag bit
+
+The test creates an image with the VK_IMAGE_CREATE_SPARSE_RESIDENCY_BIT flag bit. The sparse memory requirements of the image are queried.
+Device memory is allocated in chunks equal to the alignment parameter of the image's memory requirements.
+Memory is bound to the image leaving gaps between bound blocks, each gap equal in size to the alignment.
+
+The test creates two queues - one supporting sparse binding operations, the second one supporting compute and transfer operations.
+
+The first queue is used to bind device memory to the sparse image. The binding operation signals a semaphore
+used for synchronization.
+
+The second queue is used to perform compute and transfer operations. A compute shader is invoked to fill the whole image with data.
+Afterwards the data is transferred from the sparse image to a non-sparse output buffer.
+
+The validation part retrieves data back from the output buffer to host memory. For parts of the data that correspond to regions
+of the image that have device memory bound, the comparison is done against the expected output of the compute shader.
+For parts that correspond to gaps, the data is undefined, unless the residencyNonResidentStrict device sparse property is TRUE,
+in which case reads from unbound regions must return zeros.
+
+5. Test partially resident image with mipmaps, with some mipmap levels placed in the mip tail region
+
+The test creates an image with the maximum allowed number of mipmap levels. The sparse memory requirements of the image are queried.
+Each layer of each mipmap level outside the mip tail receives a separate device memory binding. The mipmap levels that fall into
+the mip tail region are bound either with a single binding shared by all array layers or with one binding per array layer,
+depending on whether VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT is set.
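+
+The mip tail parameters are obtained from the sparse image memory requirements, roughly as follows (device and
+image are illustrative handles; the color aspect is assumed to be reported first):
+
+	deUint32 reqCount = 0;
+	vkGetImageSparseMemoryRequirements(device, image, &reqCount, DE_NULL);
+	std::vector<VkSparseImageMemoryRequirements> sparseReqs(reqCount);
+	vkGetImageSparseMemoryRequirements(device, image, &reqCount, &sparseReqs[0]);
+
+	const VkSparseImageMemoryRequirements& colorReqs = sparseReqs[0];
+	const bool singleMipTail = (colorReqs.formatProperties.flags & VK_SPARSE_IMAGE_FORMAT_SINGLE_MIPTAIL_BIT) != 0;
+
+	// levels >= imageMipTailFirstLod belong to the mip tail; with singleMipTail the tail is bound once for all
+	// array layers, otherwise once per layer at imageMipTailOffset + layer * imageMipTailStride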
+
+A compute shader is invoked to fill each mipmap level with data. Afterwards the data is transferred to a non-sparse buffer object.
+
+The validation part retrieves data back from the output buffer to host memory. The data is compared against the expected output
+of the compute shader. The test passes if the data sets are equal.
+
+6. Test memory aliasing for fully resident buffer objects
+
+The test creates two fully resident buffers (READ and WRITE) with VK_BUFFER_CREATE_SPARSE_ALIASED_BIT
+and VK_BUFFER_CREATE_SPARSE_BINDING_BIT flag bits. Both buffers have the same size.
+
+The test creates two queues - one supporting sparse binding operations, the second one supporting compute and transfer operations.
+
+The first queue is used to bind device memory to the sparse buffers. One block of device memory is allocated
+and bound to both buffers, so the buffers share the same memory.
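+
+A minimal sketch of the aliased binding (writeBuffer, readBuffer and sharedMemory are illustrative handles; both
+buffers reference the same VkDeviceMemory):
+
+	VkSparseMemoryBind bind = {};
+	bind.resourceOffset = 0;
+	bind.size           = memReqs.size;	// both buffers have the same memory requirements
+	bind.memory         = sharedMemory;	// single allocation shared by the READ and WRITE buffers
+	bind.memoryOffset   = 0;
+
+	VkSparseBufferMemoryBindInfo bufferBinds[2] = {};
+	bufferBinds[0].buffer    = writeBuffer;
+	bufferBinds[0].bindCount = 1;
+	bufferBinds[0].pBinds    = &bind;
+	bufferBinds[1].buffer    = readBuffer;
+	bufferBinds[1].bindCount = 1;
+	bufferBinds[1].pBinds    = &bind;
+	// both bind infos go into a single vkQueueBindSparse submission; data written through the WRITE buffer
+	// becomes visible through the READ buffer because they alias the same memory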
+
+The second queue is used to perform compute and transfer operations. A compute shader is invoked to fill the whole WRITE buffer with data.
+Afterwards the data from the READ buffer is transferred to a non-sparse output buffer.
+
+The validation part retrieves data back from the output buffer to host memory. The data is compared against the expected output
+of the compute shader. The test passes if the data sets are equal.
\ No newline at end of file
add_subdirectory(draw)
add_subdirectory(compute)
add_subdirectory(image)
+add_subdirectory(sparse_resources)
include_directories(
api
draw
compute
image
+ sparse_resources
)
set(DEQP_VK_COMMON_SRCS
deqp-vk-draw
deqp-vk-compute
deqp-vk-image
+ deqp-vk-sparse-resources
)
add_library(deqp-vk-common STATIC ${DEQP_VK_COMMON_SRCS})
--- /dev/null
+include_directories(..)
+
+set(DEQP_VK_SPARSE_RESOURCES_SRCS
+ vktSparseResourcesImageSparseBinding.cpp
+ vktSparseResourcesImageSparseBinding.hpp
+ vktSparseResourcesBufferSparseBinding.cpp
+ vktSparseResourcesBufferSparseBinding.hpp
+ vktSparseResourcesBase.cpp
+ vktSparseResourcesBase.hpp
+ vktSparseResourcesTests.cpp
+ vktSparseResourcesTests.hpp
+ vktSparseResourcesTestsUtil.cpp
+ vktSparseResourcesTestsUtil.hpp
+ )
+
+set(DEQP_VK_SPARSE_RESOURCES_LIBS
+ tcutil
+ vkutil
+ )
+
+add_library(deqp-vk-sparse-resources STATIC ${DEQP_VK_SPARSE_RESOURCES_SRCS})
+target_link_libraries(deqp-vk-sparse-resources ${DEQP_VK_SPARSE_RESOURCES_LIBS})
--- /dev/null
+/*------------------------------------------------------------------------
+ * Vulkan Conformance Tests
+ * ------------------------
+ *
+ * Copyright (c) 2016 The Khronos Group Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and/or associated documentation files (the
+ * "Materials"), to deal in the Materials without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Materials, and to
+ * permit persons to whom the Materials are furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice(s) and this permission notice shall be included
+ * in all copies or substantial portions of the Materials.
+ *
+ * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+ *
+ *//*!
+ * \file vktSparseResourcesBase.cpp
+ * \brief Sparse Resources Base Instance
+ *//*--------------------------------------------------------------------*/
+
+#include "vktSparseResourcesBase.hpp"
+#include "vkRefUtil.hpp"
+#include "vkQueryUtil.hpp"
+
+using namespace vk;
+
+namespace vkt
+{
+namespace sparse
+{
+
+struct QueueFamilyQueuesCount
+{
+ QueueFamilyQueuesCount() : queueCount(0u), counter(0u) {};
+
+ deUint32 queueCount;
+ deUint32 counter;
+};
+
+SparseResourcesBaseInstance::SparseResourcesBaseInstance (Context &context)
+ : TestInstance(context)
+{
+}
+
+bool SparseResourcesBaseInstance::createDeviceSupportingQueues (const QueueRequirementsVec& queueRequirements)
+{
+ const InstanceInterface& instance = m_context.getInstanceInterface();
+ const DeviceInterface& deviceInterface = m_context.getDeviceInterface();
+ const VkPhysicalDevice physicalDevice = m_context.getPhysicalDevice();
+
+ deUint32 queueFamilyPropertiesCount = 0u;
+ instance.getPhysicalDeviceQueueFamilyProperties(physicalDevice, &queueFamilyPropertiesCount, DE_NULL);
+
+ if (queueFamilyPropertiesCount == 0u)
+ {
+ return false;
+ }
+
+ std::vector<VkQueueFamilyProperties> queueFamilyProperties;
+ queueFamilyProperties.resize(queueFamilyPropertiesCount);
+
+ instance.getPhysicalDeviceQueueFamilyProperties(physicalDevice, &queueFamilyPropertiesCount, &queueFamilyProperties[0]);
+
+ typedef std::map<deUint32, QueueFamilyQueuesCount> SelectedQueuesMap;
+
+ SelectedQueuesMap selectedQueueFamilies;
+
+ for (deUint32 queueReqNdx = 0; queueReqNdx < queueRequirements.size(); ++queueReqNdx)
+ {
+ const QueueRequirements queueRequirement = queueRequirements[queueReqNdx];
+ const deUint32 queueFamilyIndex = findMatchingQueueFamilyIndex(queueFamilyProperties, queueRequirement.queueFlags);
+
+ if (queueFamilyIndex == NO_MATCH_FOUND)
+ {
+ return false;
+ }
+
+ selectedQueueFamilies[queueFamilyIndex].queueCount += queueRequirement.queueCount;
+ }
+
+ std::vector<VkDeviceQueueCreateInfo> queueInfos;
+ const float queuePriority = 1.0f;
+
+ for (SelectedQueuesMap::iterator queueFamilyIter = selectedQueueFamilies.begin(); queueFamilyIter != selectedQueueFamilies.end(); ++queueFamilyIter)
+ {
+ VkDeviceQueueCreateInfo queueInfo;
+ deMemset(&queueInfo, 0, sizeof(queueInfo));
+
+ queueInfo.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
+ queueInfo.pNext = DE_NULL;
+ queueInfo.flags = (VkDeviceQueueCreateFlags)0u;
+ queueInfo.queueFamilyIndex = queueFamilyIter->first;
+ queueInfo.queueCount = queueFamilyIter->second.queueCount;
+ queueInfo.pQueuePriorities = &queuePriority;
+
+ queueInfos.push_back(queueInfo);
+ }
+
+ VkDeviceCreateInfo deviceInfo;
+ deMemset(&deviceInfo, 0, sizeof(deviceInfo));
+
+ VkPhysicalDeviceFeatures deviceFeatures;
+ instance.getPhysicalDeviceFeatures(physicalDevice, &deviceFeatures);
+
+ deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
+ deviceInfo.pNext = DE_NULL;
+ deviceInfo.enabledExtensionCount = 0u;
+ deviceInfo.ppEnabledExtensionNames = DE_NULL;
+ deviceInfo.enabledLayerCount = 0u;
+ deviceInfo.ppEnabledLayerNames = DE_NULL;
+ deviceInfo.pEnabledFeatures = &deviceFeatures;
+ deviceInfo.queueCreateInfoCount = (deUint32)selectedQueueFamilies.size();
+ deviceInfo.pQueueCreateInfos = &queueInfos[0];
+
+ m_logicalDevice = vk::createDevice(instance, physicalDevice, &deviceInfo);
+
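+	// Retrieve the requested queues; the per-family counter hands out distinct queue indices when several requirements share a queue family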
+ for (deUint32 queueReqNdx = 0; queueReqNdx < queueRequirements.size(); ++queueReqNdx)
+ {
+ const QueueRequirements queueRequirement = queueRequirements[queueReqNdx];
+ const deUint32 queueFamilyIndex = findMatchingQueueFamilyIndex(queueFamilyProperties, queueRequirement.queueFlags);
+
+ if (queueFamilyIndex == NO_MATCH_FOUND)
+ {
+ return false;
+ }
+
+ for (deUint32 queueNdx = 0; queueNdx < queueRequirement.queueCount; ++queueNdx)
+ {
+ VkQueue queueHandle = 0;
+ deviceInterface.getDeviceQueue(*m_logicalDevice, queueFamilyIndex, selectedQueueFamilies[queueFamilyIndex].counter++, &queueHandle);
+
+ Queue queue;
+ queue.queueHandle = queueHandle;
+ queue.queueFamilyIndex = queueFamilyIndex;
+
+ m_queues[queueRequirement.queueFlags].push_back(queue);
+ }
+ }
+
+ return true;
+}
+
+const Queue& SparseResourcesBaseInstance::getQueue (const VkQueueFlags queueFlags, const deUint32 queueIndex)
+{
+ return m_queues[queueFlags][queueIndex];
+}
+
+deUint32 SparseResourcesBaseInstance::findMatchingMemoryType (const VkPhysicalDeviceMemoryProperties& deviceMemoryProperties,
+ const VkMemoryRequirements& objectMemoryRequirements,
+ const MemoryRequirement& memoryRequirement) const
+{
+ for (deUint32 memoryTypeNdx = 0; memoryTypeNdx < deviceMemoryProperties.memoryTypeCount; ++memoryTypeNdx)
+ {
+ if ((objectMemoryRequirements.memoryTypeBits & (1u << memoryTypeNdx)) != 0 &&
+ memoryRequirement.matchesHeap(deviceMemoryProperties.memoryTypes[memoryTypeNdx].propertyFlags))
+ {
+ return memoryTypeNdx;
+ }
+ }
+
+ return NO_MATCH_FOUND;
+}
+
+deUint32 SparseResourcesBaseInstance::findMatchingQueueFamilyIndex (const QueueFamilyPropertiesVec& queueFamilyProperties,
+ const VkQueueFlags queueFlags) const
+{
+ for (deUint32 queueNdx = 0; queueNdx < queueFamilyProperties.size(); ++queueNdx)
+ {
+ if ((queueFamilyProperties[queueNdx].queueFlags & queueFlags) == queueFlags)
+ {
+ return queueNdx;
+ }
+ }
+
+ return NO_MATCH_FOUND;
+}
+
+} // sparse
+} // vkt
--- /dev/null
+#ifndef _VKTSPARSERESOURCESBASE_HPP
+#define _VKTSPARSERESOURCESBASE_HPP
+/*------------------------------------------------------------------------
+ * Vulkan Conformance Tests
+ * ------------------------
+ *
+ * Copyright (c) 2016 The Khronos Group Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and/or associated documentation files (the
+ * "Materials"), to deal in the Materials without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Materials, and to
+ * permit persons to whom the Materials are furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice(s) and this permission notice shall be included
+ * in all copies or substantial portions of the Materials.
+ *
+ * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+ *
+ *//*!
+ * \file vktSparseResourcesBase.hpp
+ * \brief Sparse Resources Base Instance
+ *//*--------------------------------------------------------------------*/
+
+#include "tcuDefs.hpp"
+#include "tcuTestCase.hpp"
+#include "vktTestCaseUtil.hpp"
+
+#include "vkDefs.hpp"
+#include "vkMemUtil.hpp"
+#include "vkPlatform.hpp"
+#include "vkRef.hpp"
+#include "vkTypeUtil.hpp"
+
+#include "deUniquePtr.hpp"
+#include "deStringUtil.hpp"
+
+#include <map>
+#include <vector>
+
+namespace vkt
+{
+namespace sparse
+{
+
+enum
+{
+ NO_MATCH_FOUND = ~((deUint32)0)
+};
+
+struct Queue
+{
+ vk::VkQueue queueHandle;
+ deUint32 queueFamilyIndex;
+};
+
+struct QueueRequirements
+{
+ QueueRequirements(const vk::VkQueueFlags qFlags, const deUint32 qCount)
+ : queueFlags(qFlags)
+ , queueCount(qCount)
+ {}
+
+ vk::VkQueueFlags queueFlags;
+ deUint32 queueCount;
+};
+
+typedef std::vector<QueueRequirements> QueueRequirementsVec;
+
+class SparseResourcesBaseInstance : public TestInstance
+{
+public:
+ SparseResourcesBaseInstance (Context &context);
+
+protected:
+
+ typedef std::map<vk::VkQueueFlags, std::vector<Queue> > QueuesMap;
+ typedef std::vector<vk::VkQueueFamilyProperties> QueueFamilyPropertiesVec;
+ typedef vk::Move<vk::VkDevice> DevicePtr;
+
+ bool createDeviceSupportingQueues (const QueueRequirementsVec& queueRequirements);
+
+ const Queue& getQueue (const vk::VkQueueFlags queueFlags,
+ const deUint32 queueIndex);
+
+ deUint32 findMatchingMemoryType (const vk::VkPhysicalDeviceMemoryProperties& deviceMemoryProperties,
+ const vk::VkMemoryRequirements& objectMemoryRequirements,
+ const vk::MemoryRequirement& memoryRequirement) const;
+ DevicePtr m_logicalDevice;
+
+private:
+
+ deUint32 findMatchingQueueFamilyIndex (const QueueFamilyPropertiesVec& queueFamilyProperties,
+ const vk::VkQueueFlags queueFlags) const;
+ QueuesMap m_queues;
+};
+
+} // sparse
+} // vkt
+
+#endif // _VKTSPARSERESOURCESBASE_HPP
--- /dev/null
+/*------------------------------------------------------------------------
+ * Vulkan Conformance Tests
+ * ------------------------
+ *
+ * Copyright (c) 2016 The Khronos Group Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and/or associated documentation files (the
+ * "Materials"), to deal in the Materials without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Materials, and to
+ * permit persons to whom the Materials are furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice(s) and this permission notice shall be included
+ * in all copies or substantial portions of the Materials.
+ *
+ * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+ *
+ *//*!
+ * \file vktSparseResourcesBufferSparseBinding.cpp
+ * \brief Buffer Sparse Binding tests
+ *//*--------------------------------------------------------------------*/
+
+#include "vktSparseResourcesBufferSparseBinding.hpp"
+#include "vktSparseResourcesTestsUtil.hpp"
+#include "vktSparseResourcesBase.hpp"
+#include "vktTestCaseUtil.hpp"
+
+#include "vkDefs.hpp"
+#include "vkRef.hpp"
+#include "vkRefUtil.hpp"
+#include "vkPlatform.hpp"
+#include "vkPrograms.hpp"
+#include "vkMemUtil.hpp"
+#include "vkBuilderUtil.hpp"
+#include "vkImageUtil.hpp"
+#include "vkQueryUtil.hpp"
+#include "vkTypeUtil.hpp"
+
+#include "deUniquePtr.hpp"
+#include "deStringUtil.hpp"
+
+#include <string>
+#include <vector>
+
+using namespace vk;
+
+namespace vkt
+{
+namespace sparse
+{
+namespace
+{
+
+class BufferSparseBindingCase : public TestCase
+{
+public:
+ BufferSparseBindingCase (tcu::TestContext& testCtx,
+ const std::string& name,
+ const std::string& description,
+ const deUint32 bufferSize);
+
+ TestInstance* createInstance (Context& context) const;
+
+private:
+ const deUint32 m_bufferSize;
+};
+
+BufferSparseBindingCase::BufferSparseBindingCase (tcu::TestContext& testCtx,
+ const std::string& name,
+ const std::string& description,
+ const deUint32 bufferSize)
+ : TestCase (testCtx, name, description)
+ , m_bufferSize (bufferSize)
+{
+}
+
+class BufferSparseBindingInstance : public SparseResourcesBaseInstance
+{
+public:
+ BufferSparseBindingInstance (Context& context,
+ const deUint32 bufferSize);
+
+ tcu::TestStatus iterate (void);
+
+private:
+ const deUint32 m_bufferSize;
+};
+
+BufferSparseBindingInstance::BufferSparseBindingInstance (Context& context,
+ const deUint32 bufferSize)
+
+ : SparseResourcesBaseInstance (context)
+ , m_bufferSize (bufferSize)
+{
+}
+
+tcu::TestStatus BufferSparseBindingInstance::iterate (void)
+{
+ const InstanceInterface& instance = m_context.getInstanceInterface();
+ const DeviceInterface& deviceInterface = m_context.getDeviceInterface();
+ const VkPhysicalDevice physicalDevice = m_context.getPhysicalDevice();
+
+ VkPhysicalDeviceFeatures deviceFeatures;
+ instance.getPhysicalDeviceFeatures(physicalDevice, &deviceFeatures);
+
+ if (deviceFeatures.sparseBinding == false)
+ {
+ return tcu::TestStatus(QP_TEST_RESULT_NOT_SUPPORTED, "Sparse binding not supported");
+ }
+
+ VkPhysicalDeviceProperties deviceProperties;
+ instance.getPhysicalDeviceProperties(physicalDevice, &deviceProperties);
+
+ QueueRequirementsVec queueRequirements;
+ queueRequirements.push_back(QueueRequirements(VK_QUEUE_SPARSE_BINDING_BIT, 1u));
+ queueRequirements.push_back(QueueRequirements(VK_QUEUE_COMPUTE_BIT, 1u));
+
+	// Create logical device supporting both sparse and compute queues
+ if (!createDeviceSupportingQueues(queueRequirements))
+ {
+ return tcu::TestStatus(QP_TEST_RESULT_FAIL, "Could not create device supporting sparse and compute queue");
+ }
+
+ const VkPhysicalDeviceMemoryProperties deviceMemoryProperties = getPhysicalDeviceMemoryProperties(instance, physicalDevice);
+
+ // Create memory allocator for logical device
+ const de::UniquePtr<Allocator> allocator(new SimpleAllocator(deviceInterface, *m_logicalDevice, deviceMemoryProperties));
+
+ // Create queue supporting sparse binding operations
+ const Queue& sparseQueue = getQueue(VK_QUEUE_SPARSE_BINDING_BIT, 0);
+
+ // Create queue supporting compute and transfer operations
+ const Queue& computeQueue = getQueue(VK_QUEUE_COMPUTE_BIT, 0);
+
+ VkBufferCreateInfo bufferCreateInfo;
+
+ bufferCreateInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO; // VkStructureType sType;
+ bufferCreateInfo.pNext = DE_NULL; // const void* pNext;
+ bufferCreateInfo.flags = VK_BUFFER_CREATE_SPARSE_BINDING_BIT; // VkBufferCreateFlags flags;
+ bufferCreateInfo.size = m_bufferSize; // VkDeviceSize size;
+ bufferCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT |
+ VK_BUFFER_USAGE_TRANSFER_DST_BIT; // VkBufferUsageFlags usage;
+ bufferCreateInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE; // VkSharingMode sharingMode;
+ bufferCreateInfo.queueFamilyIndexCount = 0u; // deUint32 queueFamilyIndexCount;
+ bufferCreateInfo.pQueueFamilyIndices = DE_NULL; // const deUint32* pQueueFamilyIndices;
+
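+	// Allow sharing of the sparse buffer by two different queue families (if necessary)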
+ const deUint32 queueFamilyIndices[] = { sparseQueue.queueFamilyIndex, computeQueue.queueFamilyIndex };
+
+ if (sparseQueue.queueFamilyIndex != computeQueue.queueFamilyIndex)
+ {
+ bufferCreateInfo.sharingMode = VK_SHARING_MODE_CONCURRENT; // VkSharingMode sharingMode;
+ bufferCreateInfo.queueFamilyIndexCount = 2u; // deUint32 queueFamilyIndexCount;
+ bufferCreateInfo.pQueueFamilyIndices = queueFamilyIndices; // const deUint32* pQueueFamilyIndices;
+ }
+
+ // Create sparse buffer
+ const Unique<VkBuffer> sparseBuffer(createBuffer(deviceInterface, *m_logicalDevice, &bufferCreateInfo));
+
+ const VkMemoryRequirements bufferMemRequirement = getBufferMemoryRequirements(deviceInterface, *m_logicalDevice, *sparseBuffer);
+
+ if (bufferMemRequirement.size > deviceProperties.limits.sparseAddressSpaceSize)
+ {
+ return tcu::TestStatus(QP_TEST_RESULT_NOT_SUPPORTED, "Required memory size for sparse resources exceeds device limits");
+ }
+
+ DE_ASSERT((bufferMemRequirement.size % bufferMemRequirement.alignment) == 0);
+
+ const deUint32 numSparseBinds = static_cast<deUint32>(bufferMemRequirement.size / bufferMemRequirement.alignment);
+
+ typedef de::SharedPtr< Unique<VkDeviceMemory> > DeviceMemoryUniquePtr;
+
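+	// Device memory for each sparse bind is wrapped in a shared pointer, keeping the allocations alive (and freeing them automatically) until the end of iterate()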
+ std::vector<VkSparseMemoryBind> sparseMemoryBinds;
+ std::vector<DeviceMemoryUniquePtr> deviceMemUniquePtrVec;
+ const deUint32 memoryType = findMatchingMemoryType(deviceMemoryProperties, bufferMemRequirement, MemoryRequirement::Any);
+
+ if (memoryType == NO_MATCH_FOUND)
+ {
+ return tcu::TestStatus(QP_TEST_RESULT_FAIL, "No matching memory type found");
+ }
+
+ for (deUint32 sparseBindNdx = 0; sparseBindNdx < numSparseBinds; ++sparseBindNdx)
+ {
+ const VkMemoryAllocateInfo allocInfo =
+ {
+ VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ bufferMemRequirement.alignment, // VkDeviceSize allocationSize;
+ memoryType, // deUint32 memoryTypeIndex;
+ };
+
+ VkDeviceMemory deviceMemory = 0;
+ VK_CHECK(deviceInterface.allocateMemory(*m_logicalDevice, &allocInfo, DE_NULL, &deviceMemory));
+
+ deviceMemUniquePtrVec.push_back(makeVkSharedPtr(Move<VkDeviceMemory>(check<VkDeviceMemory>(deviceMemory), Deleter<VkDeviceMemory>(deviceInterface, *m_logicalDevice, DE_NULL))));
+
+ const VkSparseMemoryBind sparseMemoryBind = makeSparseMemoryBind
+ (
+ bufferMemRequirement.alignment * sparseBindNdx, //VkDeviceSize resourceOffset
+ bufferMemRequirement.alignment, //VkDeviceSize size
+ deviceMemory, //VkDeviceMemory memory
+ 0u, //VkDeviceSize memoryOffset
+ 0u //VkSparseMemoryBindFlags flags
+ );
+
+ sparseMemoryBinds.push_back(sparseMemoryBind);
+ }
+
+ const VkSparseBufferMemoryBindInfo sparseBufferBindInfo = makeSparseBufferMemoryBindInfo
+ (
+ *sparseBuffer, //VkBuffer buffer;
+ numSparseBinds, //deUint32 bindCount;
+		&sparseMemoryBinds[0]				//const VkSparseMemoryBind* pBinds;
+ );
+
+ const Unique<VkSemaphore> bufferMemoryBindSemaphore(makeSemaphore(deviceInterface, *m_logicalDevice));
+
+ const VkBindSparseInfo bindSparseInfo =
+ {
+ VK_STRUCTURE_TYPE_BIND_SPARSE_INFO, //VkStructureType sType;
+ DE_NULL, //const void* pNext;
+ 0u, //deUint32 waitSemaphoreCount;
+ DE_NULL, //const VkSemaphore* pWaitSemaphores;
+ 1u, //deUint32 bufferBindCount;
+ &sparseBufferBindInfo, //const VkSparseBufferMemoryBindInfo* pBufferBinds;
+ 0u, //deUint32 imageOpaqueBindCount;
+ DE_NULL, //const VkSparseImageOpaqueMemoryBindInfo* pImageOpaqueBinds;
+ 0u, //deUint32 imageBindCount;
+ DE_NULL, //const VkSparseImageMemoryBindInfo* pImageBinds;
+ 1u, //deUint32 signalSemaphoreCount;
+ &bufferMemoryBindSemaphore.get() //const VkSemaphore* pSignalSemaphores;
+ };
+
+ // Submit sparse bind commands for execution
+ VK_CHECK(deviceInterface.queueBindSparse(sparseQueue.queueHandle, 1u, &bindSparseInfo, DE_NULL));
+
+	// Create command buffer for transfer operations
+ const Unique<VkCommandPool> commandPool(makeCommandPool(deviceInterface, *m_logicalDevice, computeQueue.queueFamilyIndex));
+ const Unique<VkCommandBuffer> commandBuffer(makeCommandBuffer(deviceInterface, *m_logicalDevice, *commandPool));
+
+ // Start recording transfer commands
+ beginCommandBuffer(deviceInterface, *commandBuffer);
+
+ const VkBufferCreateInfo inputBufferCreateInfo = makeBufferCreateInfo(m_bufferSize, VK_BUFFER_USAGE_TRANSFER_SRC_BIT);
+ const de::UniquePtr<Buffer> inputBuffer(new Buffer(deviceInterface, *m_logicalDevice, *allocator, inputBufferCreateInfo, MemoryRequirement::HostVisible));
+
+ std::vector<deUint8> referenceData;
+ referenceData.resize(m_bufferSize);
+
+ for (deUint32 valueNdx = 0; valueNdx < m_bufferSize; ++valueNdx)
+ {
+ referenceData[valueNdx] = static_cast<deUint8>((valueNdx % bufferMemRequirement.alignment) + 1u);
+ }
+
+ deMemcpy(inputBuffer->getAllocation().getHostPtr(), &referenceData[0], m_bufferSize);
+
+ flushMappedMemoryRange(deviceInterface, *m_logicalDevice, inputBuffer->getAllocation().getMemory(), inputBuffer->getAllocation().getOffset(), m_bufferSize);
+
+ const VkBufferMemoryBarrier inputBufferBarrier
+ = makeBufferMemoryBarrier( VK_ACCESS_HOST_WRITE_BIT,
+ VK_ACCESS_TRANSFER_READ_BIT,
+ inputBuffer->get(),
+ 0u,
+ m_bufferSize);
+
+ deviceInterface.cmdPipelineBarrier(*commandBuffer, VK_PIPELINE_STAGE_HOST_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT, 0u, 0u, DE_NULL, 1u, &inputBufferBarrier, 0u, DE_NULL);
+
+ const VkBufferCopy bufferCopy = makeBufferCopy(0u, 0u, m_bufferSize);
+
+ deviceInterface.cmdCopyBuffer(*commandBuffer, inputBuffer->get(), *sparseBuffer, 1u, &bufferCopy);
+
+ const VkBufferMemoryBarrier sparseBufferBarrier
+ = makeBufferMemoryBarrier( VK_ACCESS_TRANSFER_WRITE_BIT,
+ VK_ACCESS_TRANSFER_READ_BIT,
+ *sparseBuffer,
+ 0u,
+ m_bufferSize);
+
+ deviceInterface.cmdPipelineBarrier(*commandBuffer, VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT, 0u, 0u, DE_NULL, 1u, &sparseBufferBarrier, 0u, DE_NULL);
+
+ const VkBufferCreateInfo outputBufferCreateInfo = makeBufferCreateInfo(m_bufferSize, VK_BUFFER_USAGE_TRANSFER_DST_BIT);
+ const de::UniquePtr<Buffer> outputBuffer(new Buffer(deviceInterface, *m_logicalDevice, *allocator, outputBufferCreateInfo, MemoryRequirement::HostVisible));
+
+ deviceInterface.cmdCopyBuffer(*commandBuffer, *sparseBuffer, outputBuffer->get(), 1u, &bufferCopy);
+
+ const VkBufferMemoryBarrier outputBufferBarrier
+ = makeBufferMemoryBarrier( VK_ACCESS_TRANSFER_WRITE_BIT,
+ VK_ACCESS_HOST_READ_BIT,
+ outputBuffer->get(),
+ 0u,
+ m_bufferSize);
+
+ deviceInterface.cmdPipelineBarrier(*commandBuffer, VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_HOST_BIT, 0u, 0u, DE_NULL, 1u, &outputBufferBarrier, 0u, DE_NULL);
+
+ // End recording transfer commands
+ endCommandBuffer(deviceInterface, *commandBuffer);
+
+ const VkPipelineStageFlags waitStageBits[] = { VK_PIPELINE_STAGE_TRANSFER_BIT };
+
+ // Submit transfer commands for execution and wait for completion
+ submitCommandsAndWait(deviceInterface, *m_logicalDevice, computeQueue.queueHandle, *commandBuffer, 1u, &bufferMemoryBindSemaphore.get(), waitStageBits);
+
+ // Retrieve data from output buffer to host memory
+ const Allocation& allocation = outputBuffer->getAllocation();
+
+ invalidateMappedMemoryRange(deviceInterface, *m_logicalDevice, allocation.getMemory(), allocation.getOffset(), m_bufferSize);
+
+ const deUint8* outputData = static_cast<const deUint8*>(allocation.getHostPtr());
+ tcu::TestStatus testStatus = tcu::TestStatus::incomplete();
+
+ // Compare output data with reference data
+ if (deMemCmp(&referenceData[0], outputData, m_bufferSize) == 0)
+ testStatus = tcu::TestStatus::pass("Passed");
+ else
+ testStatus = tcu::TestStatus::fail("Failed");
+
+ // Wait for sparse queue to become idle
+ deviceInterface.queueWaitIdle(sparseQueue.queueHandle);
+
+ return testStatus;
+}
+
+TestInstance* BufferSparseBindingCase::createInstance (Context& context) const
+{
+ return new BufferSparseBindingInstance(context, m_bufferSize);
+}
+
+} // anonymous ns
+
+tcu::TestCaseGroup* createBufferSparseBindingTests (tcu::TestContext& testCtx)
+{
+ de::MovePtr<tcu::TestCaseGroup> testGroup(new tcu::TestCaseGroup(testCtx, "buffer_sparse_binding", "Buffer Sparse Binding"));
+
+ testGroup->addChild(new BufferSparseBindingCase(testCtx, "buffer_size_2_10", "", 1 << 10));
+ testGroup->addChild(new BufferSparseBindingCase(testCtx, "buffer_size_2_12", "", 1 << 12));
+ testGroup->addChild(new BufferSparseBindingCase(testCtx, "buffer_size_2_16", "", 1 << 16));
+ testGroup->addChild(new BufferSparseBindingCase(testCtx, "buffer_size_2_17", "", 1 << 17));
+ testGroup->addChild(new BufferSparseBindingCase(testCtx, "buffer_size_2_20", "", 1 << 20));
+ testGroup->addChild(new BufferSparseBindingCase(testCtx, "buffer_size_2_24", "", 1 << 24));
+
+ return testGroup.release();
+}
+
+} // sparse
+} // vkt
--- /dev/null
+#ifndef _VKTSPARSERESOURCESBUFFERSPARSEBINDING_HPP
+#define _VKTSPARSERESOURCESBUFFERSPARSEBINDING_HPP
+/*------------------------------------------------------------------------
+ * Vulkan Conformance Tests
+ * ------------------------
+ *
+ * Copyright (c) 2016 The Khronos Group Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and/or associated documentation files (the
+ * "Materials"), to deal in the Materials without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Materials, and to
+ * permit persons to whom the Materials are furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice(s) and this permission notice shall be included
+ * in all copies or substantial portions of the Materials.
+ *
+ * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+ *
+ *//*!
+ * \file vktSparseResourcesBufferSparseBinding.hpp
+ * \brief Buffer Sparse Binding tests
+ *//*--------------------------------------------------------------------*/
+
+#include "tcuDefs.hpp"
+#include "vktTestCase.hpp"
+
+namespace vkt
+{
+namespace sparse
+{
+
+tcu::TestCaseGroup* createBufferSparseBindingTests (tcu::TestContext& testCtx);
+
+} // sparse
+} // vkt
+
+#endif // _VKTSPARSERESOURCESBUFFERSPARSEBINDING_HPP
--- /dev/null
+/*------------------------------------------------------------------------
+* Vulkan Conformance Tests
+* ------------------------
+*
+* Copyright (c) 2016 The Khronos Group Inc.
+*
+* Permission is hereby granted, free of charge, to any person obtaining a
+* copy of this software and/or associated documentation files (the
+* "Materials"), to deal in the Materials without restriction, including
+* without limitation the rights to use, copy, modify, merge, publish,
+* distribute, sublicense, and/or sell copies of the Materials, and to
+* permit persons to whom the Materials are furnished to do so, subject to
+* the following conditions:
+*
+* The above copyright notice(s) and this permission notice shall be included
+* in all copies or substantial portions of the Materials.
+*
+* THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+* MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+*
+*//*!
+* \file vktSparseResourcesImageSparseBinding.cpp
+* \brief Sparse fully resident images with mipmaps tests
+*//*--------------------------------------------------------------------*/
+
+#include "vktSparseResourcesBufferSparseBinding.hpp"
+#include "vktSparseResourcesTestsUtil.hpp"
+#include "vktSparseResourcesBase.hpp"
+#include "vktTestCaseUtil.hpp"
+
+#include "vkDefs.hpp"
+#include "vkRef.hpp"
+#include "vkRefUtil.hpp"
+#include "vkPlatform.hpp"
+#include "vkPrograms.hpp"
+#include "vkMemUtil.hpp"
+#include "vkBuilderUtil.hpp"
+#include "vkImageUtil.hpp"
+#include "vkQueryUtil.hpp"
+#include "vkTypeUtil.hpp"
+
+#include "deUniquePtr.hpp"
+#include "deStringUtil.hpp"
+
+#include <string>
+#include <vector>
+
+using namespace vk;
+
+namespace vkt
+{
+namespace sparse
+{
+namespace
+{
+
+class ImageSparseBindingCase : public TestCase
+{
+public:
+ ImageSparseBindingCase (tcu::TestContext& testCtx,
+ const std::string& name,
+ const std::string& description,
+ const ImageType imageType,
+ const tcu::UVec3& imageSize,
+ const tcu::TextureFormat& format);
+
+ TestInstance* createInstance (Context& context) const;
+
+private:
+ const ImageType m_imageType;
+ const tcu::UVec3 m_imageSize;
+ const tcu::TextureFormat m_format;
+};
+
+ImageSparseBindingCase::ImageSparseBindingCase (tcu::TestContext& testCtx,
+ const std::string& name,
+ const std::string& description,
+ const ImageType imageType,
+ const tcu::UVec3& imageSize,
+ const tcu::TextureFormat& format)
+ : TestCase (testCtx, name, description)
+ , m_imageType (imageType)
+ , m_imageSize (imageSize)
+ , m_format (format)
+{
+}
+
+class ImageSparseBindingInstance : public SparseResourcesBaseInstance
+{
+public:
+ ImageSparseBindingInstance (Context& context,
+ const ImageType imageType,
+ const tcu::UVec3& imageSize,
+ const tcu::TextureFormat& format);
+
+ tcu::TestStatus iterate (void);
+
+private:
+ const ImageType m_imageType;
+ const tcu::UVec3 m_imageSize;
+ const tcu::TextureFormat m_format;
+};
+
+ImageSparseBindingInstance::ImageSparseBindingInstance (Context& context,
+ const ImageType imageType,
+ const tcu::UVec3& imageSize,
+ const tcu::TextureFormat& format)
+ : SparseResourcesBaseInstance (context)
+ , m_imageType (imageType)
+ , m_imageSize (imageSize)
+ , m_format (format)
+{
+}
+
+tcu::TestStatus ImageSparseBindingInstance::iterate (void)
+{
+ const InstanceInterface& instance = m_context.getInstanceInterface();
+ const DeviceInterface& deviceInterface = m_context.getDeviceInterface();
+ const VkPhysicalDevice physicalDevice = m_context.getPhysicalDevice();
+
+ // Check if device supports sparse binding
+ const VkPhysicalDeviceFeatures deviceFeatures = getPhysicalDeviceFeatures(instance, physicalDevice);
+
+ if (deviceFeatures.sparseBinding == false)
+ {
+ return tcu::TestStatus(QP_TEST_RESULT_NOT_SUPPORTED, "Device does not support sparse binding");
+ }
+
+ // Check if image size does not exceed device limits
+ const VkPhysicalDeviceProperties deviceProperties = getPhysicalDeviceProperties(instance, physicalDevice);
+
+ if (isImageSizeSupported(m_imageType, m_imageSize, deviceProperties.limits) == false)
+ {
+ return tcu::TestStatus(QP_TEST_RESULT_NOT_SUPPORTED, "Image size not supported for device");
+ }
+
+ QueueRequirementsVec queueRequirements;
+ queueRequirements.push_back(QueueRequirements(VK_QUEUE_SPARSE_BINDING_BIT, 1u));
+ queueRequirements.push_back(QueueRequirements(VK_QUEUE_COMPUTE_BIT, 1u));
+
+ // Create logical device supporting both sparse and compute queues
+ if (!createDeviceSupportingQueues(queueRequirements))
+ {
+ return tcu::TestStatus(QP_TEST_RESULT_FAIL, "Could not create device supporting sparse and compute queue");
+ }
+
+ const VkPhysicalDeviceMemoryProperties deviceMemoryProperties = getPhysicalDeviceMemoryProperties(instance, physicalDevice);
+
+ // Create memory allocator for logical device
+ const de::UniquePtr<Allocator> allocator(new SimpleAllocator(deviceInterface, *m_logicalDevice, deviceMemoryProperties));
+
+ // Create queue supporting sparse binding operations
+ const Queue& sparseQueue = getQueue(VK_QUEUE_SPARSE_BINDING_BIT, 0);
+
+ // Create queue supporting compute and transfer operations
+ const Queue& computeQueue = getQueue(VK_QUEUE_COMPUTE_BIT, 0);
+
+ VkImageCreateInfo imageSparseInfo;
+
+ imageSparseInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO; //VkStructureType sType;
+ imageSparseInfo.pNext = DE_NULL; //const void* pNext;
+ imageSparseInfo.flags = VK_IMAGE_CREATE_SPARSE_BINDING_BIT; //VkImageCreateFlags flags;
+ imageSparseInfo.imageType = mapImageType(m_imageType); //VkImageType imageType;
+ imageSparseInfo.format = mapTextureFormat(m_format); //VkFormat format;
+ imageSparseInfo.extent = makeExtent3D(getLayerSize(m_imageType, m_imageSize)); //VkExtent3D extent;
+ imageSparseInfo.arrayLayers = getNumLayers(m_imageType, m_imageSize); //deUint32 arrayLayers;
+ imageSparseInfo.samples = VK_SAMPLE_COUNT_1_BIT; //VkSampleCountFlagBits samples;
+ imageSparseInfo.tiling = VK_IMAGE_TILING_OPTIMAL; //VkImageTiling tiling;
+ imageSparseInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED; //VkImageLayout initialLayout;
+ imageSparseInfo.usage = VK_IMAGE_USAGE_TRANSFER_SRC_BIT |
+ VK_IMAGE_USAGE_TRANSFER_DST_BIT; //VkImageUsageFlags usage;
+ imageSparseInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE; //VkSharingMode sharingMode;
+ imageSparseInfo.queueFamilyIndexCount = 0u; //deUint32 queueFamilyIndexCount;
+ imageSparseInfo.pQueueFamilyIndices = DE_NULL; //const deUint32* pQueueFamilyIndices;
+
+ if (m_imageType == IMAGE_TYPE_CUBE || m_imageType == IMAGE_TYPE_CUBE_ARRAY)
+ {
+ imageSparseInfo.flags |= VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT;
+ }
+
+	// Check that the requested image parameters (format, type, tiling, usage, flags) are supported by the device
+	VkImageFormatProperties imageFormatProperties;
+	if (instance.getPhysicalDeviceImageFormatProperties(physicalDevice,
+		imageSparseInfo.format,
+		imageSparseInfo.imageType,
+		imageSparseInfo.tiling,
+		imageSparseInfo.usage,
+		imageSparseInfo.flags,
+		&imageFormatProperties) != VK_SUCCESS)
+	{
+		return tcu::TestStatus(QP_TEST_RESULT_NOT_SUPPORTED, "Image format not supported for the requested parameters");
+	}
+
+ imageSparseInfo.mipLevels = getImageMaxMipLevels(imageFormatProperties, imageSparseInfo);
+
+ // Allow sharing of sparse image by two different queue families (if necessary)
+ const deUint32 queueFamilyIndices[] = { sparseQueue.queueFamilyIndex, computeQueue.queueFamilyIndex };
+
+ if (sparseQueue.queueFamilyIndex != computeQueue.queueFamilyIndex)
+ {
+ imageSparseInfo.sharingMode = VK_SHARING_MODE_CONCURRENT; //VkSharingMode sharingMode;
+ imageSparseInfo.queueFamilyIndexCount = 2u; //deUint32 queueFamilyIndexCount;
+ imageSparseInfo.pQueueFamilyIndices = queueFamilyIndices; //const deUint32* pQueueFamilyIndices;
+ }
+
+ // Create sparse image
+ const Unique<VkImage> imageSparse(createImage(deviceInterface, *m_logicalDevice, &imageSparseInfo));
+
+ // Get sparse image general memory requirements
+ const VkMemoryRequirements imageSparseMemRequirements = getImageMemoryRequirements(deviceInterface, *m_logicalDevice, *imageSparse);
+
+ // Check if required image memory size does not exceed device limits
+ if (imageSparseMemRequirements.size > deviceProperties.limits.sparseAddressSpaceSize)
+ {
+ return tcu::TestStatus(QP_TEST_RESULT_NOT_SUPPORTED, "Required memory size for sparse resource exceeds device limits");
+ }
+
+ DE_ASSERT((imageSparseMemRequirements.size % imageSparseMemRequirements.alignment) == 0);
+
+ typedef de::SharedPtr< Unique<VkDeviceMemory> > DeviceMemoryUniquePtr;
+
+ std::vector<VkSparseMemoryBind> sparseMemoryBinds;
+ std::vector<DeviceMemoryUniquePtr> deviceMemUniquePtrVec;
+ const deUint32 numSparseBinds = static_cast<deUint32>(imageSparseMemRequirements.size / imageSparseMemRequirements.alignment);
+ const deUint32 memoryType = findMatchingMemoryType(deviceMemoryProperties, imageSparseMemRequirements, MemoryRequirement::Any);
+
+ if (memoryType == NO_MATCH_FOUND)
+ {
+ return tcu::TestStatus(QP_TEST_RESULT_FAIL, "No matching memory type found");
+ }
+
+ for (deUint32 sparseBindNdx = 0; sparseBindNdx < numSparseBinds; ++sparseBindNdx)
+ {
+ const VkMemoryAllocateInfo allocInfo =
+ {
+ VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ imageSparseMemRequirements.alignment, // VkDeviceSize allocationSize;
+ memoryType, // deUint32 memoryTypeIndex;
+ };
+
+ VkDeviceMemory deviceMemory = 0;
+ VK_CHECK(deviceInterface.allocateMemory(*m_logicalDevice, &allocInfo, DE_NULL, &deviceMemory));
+
+ deviceMemUniquePtrVec.push_back(makeVkSharedPtr(Move<VkDeviceMemory>(check<VkDeviceMemory>(deviceMemory), Deleter<VkDeviceMemory>(deviceInterface, *m_logicalDevice, DE_NULL))));
+
+ const VkSparseMemoryBind sparseMemoryBind = makeSparseMemoryBind
+ (
+ imageSparseMemRequirements.alignment * sparseBindNdx, //VkDeviceSize resourceOffset
+ imageSparseMemRequirements.alignment, //VkDeviceSize size
+ deviceMemory, //VkDeviceMemory memory;
+ 0u, //VkDeviceSize memoryOffset;
+ 0u //VkSparseMemoryBindFlags flags;
+ );
+
+ sparseMemoryBinds.push_back(sparseMemoryBind);
+ }
+
+ const VkSparseImageOpaqueMemoryBindInfo opaqueBindInfo = makeSparseImageOpaqueMemoryBindInfo
+ (
+ *imageSparse, // VkImage image
+ numSparseBinds, // deUint32 bindCount
+ &sparseMemoryBinds[0] // const VkSparseMemoryBind* pBinds
+ );
+
+ const Unique<VkSemaphore> imageMemoryBindSemaphore(makeSemaphore(deviceInterface, *m_logicalDevice));
+
+ const VkBindSparseInfo bindSparseInfo =
+ {
+ VK_STRUCTURE_TYPE_BIND_SPARSE_INFO, //VkStructureType sType;
+ DE_NULL, //const void* pNext;
+ 0u, //deUint32 waitSemaphoreCount;
+ DE_NULL, //const VkSemaphore* pWaitSemaphores;
+ 0u, //deUint32 bufferBindCount;
+ DE_NULL, //const VkSparseBufferMemoryBindInfo* pBufferBinds;
+ 1u, //deUint32 imageOpaqueBindCount;
+ &opaqueBindInfo, //const VkSparseImageOpaqueMemoryBindInfo* pImageOpaqueBinds;
+ 0u, //deUint32 imageBindCount;
+ DE_NULL, //const VkSparseImageMemoryBindInfo* pImageBinds;
+ 1u, //deUint32 signalSemaphoreCount;
+ &imageMemoryBindSemaphore.get() //const VkSemaphore* pSignalSemaphores;
+ };
+
+ // Submit sparse bind commands for execution
+ VK_CHECK(deviceInterface.queueBindSparse(sparseQueue.queueHandle, 1u, &bindSparseInfo, DE_NULL));
+
+	// Create command buffer for compute and transfer operations
+ const Unique<VkCommandPool> commandPool(makeCommandPool(deviceInterface, *m_logicalDevice, computeQueue.queueFamilyIndex));
+ const Unique<VkCommandBuffer> commandBuffer(makeCommandBuffer(deviceInterface, *m_logicalDevice, *commandPool));
+
+ // Start recording commands
+ beginCommandBuffer(deviceInterface, *commandBuffer);
+
+ const deUint32 imageSizeInBytes = getImageSizeInBytes(imageSparseInfo.extent, imageSparseInfo.arrayLayers, m_format, imageSparseInfo.mipLevels);
+ const VkBufferCreateInfo inputBufferCreateInfo = makeBufferCreateInfo(imageSizeInBytes, VK_BUFFER_USAGE_TRANSFER_SRC_BIT);
+
+ const de::UniquePtr<Buffer> inputBuffer(new Buffer(deviceInterface, *m_logicalDevice, *allocator, inputBufferCreateInfo, MemoryRequirement::HostVisible));
+
+ std::vector<deUint8> referenceData;
+ referenceData.resize(imageSizeInBytes);
+
+ for (deUint32 valueNdx = 0; valueNdx < imageSizeInBytes; ++valueNdx)
+ {
+ referenceData[valueNdx] = static_cast<deUint8>((valueNdx % imageSparseMemRequirements.alignment) + 1u);
+ }
+
+ deMemcpy(inputBuffer->getAllocation().getHostPtr(), &referenceData[0], imageSizeInBytes);
+
+ flushMappedMemoryRange(deviceInterface, *m_logicalDevice, inputBuffer->getAllocation().getMemory(), inputBuffer->getAllocation().getOffset(), imageSizeInBytes);
+
+ const VkBufferMemoryBarrier inputBufferBarrier
+ = makeBufferMemoryBarrier(
+ VK_ACCESS_HOST_WRITE_BIT,
+ VK_ACCESS_TRANSFER_READ_BIT,
+ inputBuffer->get(),
+ 0u,
+ imageSizeInBytes);
+
+ const VkImageSubresourceRange fullImageSubresourceRange = makeImageSubresourceRange(VK_IMAGE_ASPECT_COLOR_BIT, 0u, imageSparseInfo.mipLevels, 0u, imageSparseInfo.arrayLayers);
+
+ const VkImageMemoryBarrier imageSparseTransferDstBarrier
+ = makeImageMemoryBarrier(
+ 0u,
+ VK_ACCESS_TRANSFER_WRITE_BIT,
+ VK_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+ *imageSparse,
+ fullImageSubresourceRange);
+
+ deviceInterface.cmdPipelineBarrier(*commandBuffer, VK_PIPELINE_STAGE_HOST_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT, 0u, 0u, DE_NULL, 1u, &inputBufferBarrier, 1u, &imageSparseTransferDstBarrier);
+
+ std::vector <VkBufferImageCopy> bufferImageCopy;
+ bufferImageCopy.resize(imageSparseInfo.mipLevels);
+
+ VkDeviceSize bufferOffset = 0;
+ for (deUint32 mipmapNdx = 0; mipmapNdx < imageSparseInfo.mipLevels; mipmapNdx++)
+ {
+ bufferImageCopy[mipmapNdx] = makeBufferImageCopy(mipLevelExtents(imageSparseInfo.extent, mipmapNdx), imageSparseInfo.arrayLayers, mipmapNdx, bufferOffset);
+
+ bufferOffset += getImageMipLevelSizeInBytes(imageSparseInfo.extent, imageSparseInfo.arrayLayers, m_format, mipmapNdx);
+ }
+
+ deviceInterface.cmdCopyBufferToImage(*commandBuffer, inputBuffer->get(), *imageSparse, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, static_cast<deUint32>(bufferImageCopy.size()), &bufferImageCopy[0]);
+
+ const VkImageMemoryBarrier imageSparseTransferSrcBarrier
+ = makeImageMemoryBarrier(
+ VK_ACCESS_TRANSFER_WRITE_BIT,
+ VK_ACCESS_TRANSFER_READ_BIT,
+ VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+ VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
+ *imageSparse,
+ fullImageSubresourceRange);
+
+ deviceInterface.cmdPipelineBarrier(*commandBuffer, VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT, 0u, 0u, DE_NULL, 0u, DE_NULL, 1u, &imageSparseTransferSrcBarrier);
+
+ const VkBufferCreateInfo outputBufferCreateInfo = makeBufferCreateInfo(imageSizeInBytes, VK_BUFFER_USAGE_TRANSFER_DST_BIT);
+ const de::UniquePtr<Buffer> outputBuffer(new Buffer(deviceInterface, *m_logicalDevice, *allocator, outputBufferCreateInfo, MemoryRequirement::HostVisible));
+
+ deviceInterface.cmdCopyImageToBuffer(*commandBuffer, *imageSparse, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, outputBuffer->get(), static_cast<deUint32>(bufferImageCopy.size()), &bufferImageCopy[0]);
+
+ const VkBufferMemoryBarrier outputBufferBarrier
+ = makeBufferMemoryBarrier(
+ VK_ACCESS_TRANSFER_WRITE_BIT,
+ VK_ACCESS_HOST_READ_BIT,
+ outputBuffer->get(),
+ 0u,
+ imageSizeInBytes);
+
+ deviceInterface.cmdPipelineBarrier(*commandBuffer, VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_HOST_BIT, 0u, 0u, DE_NULL, 1u, &outputBufferBarrier, 0u, DE_NULL);
+
+ // End recording commands
+ endCommandBuffer(deviceInterface, *commandBuffer);
+
+ const VkPipelineStageFlags stageBits[] = { VK_PIPELINE_STAGE_TRANSFER_BIT };
+
+ // Submit commands for execution and wait for completion
+ submitCommandsAndWait(deviceInterface, *m_logicalDevice, computeQueue.queueHandle, *commandBuffer, 1u, &imageMemoryBindSemaphore.get(), stageBits);
+
+ // Retrieve data from buffer to host memory
+ const Allocation& allocation = outputBuffer->getAllocation();
+
+ invalidateMappedMemoryRange(deviceInterface, *m_logicalDevice, allocation.getMemory(), allocation.getOffset(), imageSizeInBytes);
+
+ const deUint8* outputData = static_cast<const deUint8*>(allocation.getHostPtr());
+ tcu::TestStatus testStatus = tcu::TestStatus::pass("Passed");
+
+ if (deMemCmp(outputData, &referenceData[0], imageSizeInBytes) != 0)
+ {
+ testStatus = tcu::TestStatus::fail("Failed");
+ }
+
+ // Wait for sparse queue to become idle
+ deviceInterface.queueWaitIdle(sparseQueue.queueHandle);
+
+ return testStatus;
+}
+
+TestInstance* ImageSparseBindingCase::createInstance (Context& context) const
+{
+ return new ImageSparseBindingInstance(context, m_imageType, m_imageSize, m_format);
+}
+
+} // anonymous ns
+
+tcu::TestCaseGroup* createImageSparseBindingTests(tcu::TestContext& testCtx)
+{
+	de::MovePtr<tcu::TestCaseGroup> testGroup(new tcu::TestCaseGroup(testCtx, "image_sparse_binding", "Image Sparse Binding"));
+
+ static const deUint32 sizeCountPerImageType = 3u;
+
+ struct ImageParameters
+ {
+ ImageType imageType;
+ tcu::UVec3 imageSizes[sizeCountPerImageType];
+ };
+
+ static const ImageParameters imageParametersArray[] =
+ {
+ { IMAGE_TYPE_1D, { tcu::UVec3(512u, 1u, 1u ), tcu::UVec3(1024u, 1u, 1u), tcu::UVec3(11u, 1u, 1u) } },
+ { IMAGE_TYPE_1D_ARRAY, { tcu::UVec3(512u, 1u, 64u), tcu::UVec3(1024u, 1u, 8u), tcu::UVec3(11u, 1u, 3u) } },
+ { IMAGE_TYPE_2D, { tcu::UVec3(512u, 256u, 1u ), tcu::UVec3(1024u, 128u, 1u), tcu::UVec3(11u, 137u, 1u) } },
+ { IMAGE_TYPE_2D_ARRAY, { tcu::UVec3(512u, 256u, 6u ), tcu::UVec3(1024u, 128u, 8u), tcu::UVec3(11u, 137u, 3u) } },
+ { IMAGE_TYPE_3D, { tcu::UVec3(512u, 256u, 6u ), tcu::UVec3(1024u, 128u, 8u), tcu::UVec3(11u, 137u, 3u) } },
+ { IMAGE_TYPE_CUBE, { tcu::UVec3(512u, 256u, 1u ), tcu::UVec3(1024u, 128u, 1u), tcu::UVec3(11u, 137u, 1u) } },
+ { IMAGE_TYPE_CUBE_ARRAY,{ tcu::UVec3(512u, 256u, 6u ), tcu::UVec3(1024u, 128u, 8u), tcu::UVec3(11u, 137u, 3u) } }
+ };
+
+ static const tcu::TextureFormat formats[] =
+ {
+ tcu::TextureFormat(tcu::TextureFormat::R, tcu::TextureFormat::SIGNED_INT32),
+ tcu::TextureFormat(tcu::TextureFormat::R, tcu::TextureFormat::SIGNED_INT16),
+ tcu::TextureFormat(tcu::TextureFormat::R, tcu::TextureFormat::SIGNED_INT8),
+ tcu::TextureFormat(tcu::TextureFormat::RGBA, tcu::TextureFormat::UNSIGNED_INT32),
+ tcu::TextureFormat(tcu::TextureFormat::RGBA, tcu::TextureFormat::UNSIGNED_INT16),
+ tcu::TextureFormat(tcu::TextureFormat::RGBA, tcu::TextureFormat::UNSIGNED_INT8)
+ };
+
+ for (deInt32 imageTypeNdx = 0; imageTypeNdx < DE_LENGTH_OF_ARRAY(imageParametersArray); ++imageTypeNdx)
+ {
+ const ImageType imageType = imageParametersArray[imageTypeNdx].imageType;
+ de::MovePtr<tcu::TestCaseGroup> imageTypeGroup(new tcu::TestCaseGroup(testCtx, getImageTypeName(imageType).c_str(), ""));
+
+ for (deInt32 formatNdx = 0; formatNdx < DE_LENGTH_OF_ARRAY(formats); ++formatNdx)
+ {
+ const tcu::TextureFormat& format = formats[formatNdx];
+ de::MovePtr<tcu::TestCaseGroup> formatGroup(new tcu::TestCaseGroup(testCtx, getShaderImageFormatQualifier(format).c_str(), ""));
+
+ for (deInt32 imageSizeNdx = 0; imageSizeNdx < DE_LENGTH_OF_ARRAY(imageParametersArray[imageTypeNdx].imageSizes); ++imageSizeNdx)
+ {
+ const tcu::UVec3 imageSize = imageParametersArray[imageTypeNdx].imageSizes[imageSizeNdx];
+
+ std::ostringstream stream;
+ stream << imageSize.x() << "_" << imageSize.y() << "_" << imageSize.z();
+
+ formatGroup->addChild(new ImageSparseBindingCase(testCtx, stream.str(), "", imageType, imageSize, format));
+ }
+ imageTypeGroup->addChild(formatGroup.release());
+ }
+ testGroup->addChild(imageTypeGroup.release());
+ }
+
+ return testGroup.release();
+}
+
+} // sparse
+} // vkt
--- /dev/null
+#ifndef _VKTSPARSERESOURCESIMAGESPARSEBINDING_HPP
+#define _VKTSPARSERESOURCESIMAGESPARSEBINDING_HPP
+/*------------------------------------------------------------------------
+* Vulkan Conformance Tests
+* ------------------------
+*
+* Copyright (c) 2016 The Khronos Group Inc.
+*
+* Permission is hereby granted, free of charge, to any person obtaining a
+* copy of this software and/or associated documentation files (the
+* "Materials"), to deal in the Materials without restriction, including
+* without limitation the rights to use, copy, modify, merge, publish,
+* distribute, sublicense, and/or sell copies of the Materials, and to
+* permit persons to whom the Materials are furnished to do so, subject to
+* the following conditions:
+*
+* The above copyright notice(s) and this permission notice shall be included
+* in all copies or substantial portions of the Materials.
+*
+* THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+* MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+*
+*//*!
+* \file vktSparseResourcesImageSparseBinding.hpp
+* \brief Sparse fully resident images with mipmaps tests
+*//*--------------------------------------------------------------------*/
+
+#include "tcuDefs.hpp"
+#include "vktTestCase.hpp"
+
+namespace vkt
+{
+namespace sparse
+{
+
+tcu::TestCaseGroup* createImageSparseBindingTests(tcu::TestContext& testCtx);
+
+} // sparse
+} // vkt
+
+#endif // _VKTSPARSERESOURCESIMAGESPARSEBINDING_HPP
--- /dev/null
+/*------------------------------------------------------------------------
+ * Vulkan Conformance Tests
+ * ------------------------
+ *
+ * Copyright (c) 2016 The Khronos Group Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and/or associated documentation files (the
+ * "Materials"), to deal in the Materials without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Materials, and to
+ * permit persons to whom the Materials are furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice(s) and this permission notice shall be included
+ * in all copies or substantial portions of the Materials.
+ *
+ * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+ *
+ *//*!
+ * \file vktSparseResourcesTests.cpp
+ * \brief Sparse Resources Tests
+ *//*--------------------------------------------------------------------*/
+
+#include "vktSparseResourcesTests.hpp"
+#include "vktSparseResourcesBufferSparseBinding.hpp"
+#include "vktSparseResourcesImageSparseBinding.hpp"
+#include "deUniquePtr.hpp"
+
+namespace vkt
+{
+namespace sparse
+{
+
+tcu::TestCaseGroup* createTests (tcu::TestContext& testCtx)
+{
+ de::MovePtr<tcu::TestCaseGroup> sparseTests (new tcu::TestCaseGroup(testCtx, "sparse_resources", "Sparse Resources Tests"));
+
+ sparseTests->addChild(createBufferSparseBindingTests(testCtx));
+ sparseTests->addChild(createImageSparseBindingTests(testCtx));
+
+ return sparseTests.release();
+}
+
+} // sparse
+} // vkt
--- /dev/null
+#ifndef _VKTSPARSERESOURCESTESTS_HPP
+#define _VKTSPARSERESOURCESTESTS_HPP
+/*------------------------------------------------------------------------
+ * Vulkan Conformance Tests
+ * ------------------------
+ *
+ * Copyright (c) 2016 The Khronos Group Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and/or associated documentation files (the
+ * "Materials"), to deal in the Materials without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Materials, and to
+ * permit persons to whom the Materials are furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice(s) and this permission notice shall be included
+ * in all copies or substantial portions of the Materials.
+ *
+ * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+ *
+ *//*!
+ * \file vktSparseResourcesTests.hpp
+ * \brief Sparse Resources Tests
+ *//*--------------------------------------------------------------------*/
+
+#include "tcuDefs.hpp"
+#include "tcuTestCase.hpp"
+
+namespace vkt
+{
+namespace sparse
+{
+
+tcu::TestCaseGroup* createTests (tcu::TestContext& testCtx);
+
+} // sparse
+} // vkt
+
+#endif // _VKTSPARSERESOURCESTESTS_HPP
--- /dev/null
+/*------------------------------------------------------------------------
+ * Vulkan Conformance Tests
+ * ------------------------
+ *
+ * Copyright (c) 2016 The Khronos Group Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and/or associated documentation files (the
+ * "Materials"), to deal in the Materials without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Materials, and to
+ * permit persons to whom the Materials are furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice(s) and this permission notice shall be included
+ * in all copies or substantial portions of the Materials.
+ *
+ * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+ *
+ *//*!
+ * \file vktSparseResourcesTestsUtil.cpp
+ * \brief Sparse Resources Tests Utility Classes
+ *//*--------------------------------------------------------------------*/
+
+#include "vktSparseResourcesTestsUtil.hpp"
+#include "vkQueryUtil.hpp"
+#include "vkTypeUtil.hpp"
+#include "tcuTextureUtil.hpp"
+
+#include "deMath.h"
+
+using namespace vk;
+
+namespace vkt
+{
+namespace sparse
+{
+
+Buffer::Buffer (const DeviceInterface& vk,
+ const VkDevice device,
+ Allocator& allocator,
+ const VkBufferCreateInfo& bufferCreateInfo,
+ const MemoryRequirement memoryRequirement)
+ : m_buffer (createBuffer(vk, device, &bufferCreateInfo))
+ , m_allocation (allocator.allocate(getBufferMemoryRequirements(vk, device, *m_buffer), memoryRequirement))
+{
+ VK_CHECK(vk.bindBufferMemory(device, *m_buffer, m_allocation->getMemory(), m_allocation->getOffset()));
+}
+
+Image::Image (const DeviceInterface& vk,
+ const VkDevice device,
+ Allocator& allocator,
+ const VkImageCreateInfo& imageCreateInfo,
+ const MemoryRequirement memoryRequirement)
+ : m_image (createImage(vk, device, &imageCreateInfo))
+ , m_allocation (allocator.allocate(getImageMemoryRequirements(vk, device, *m_image), memoryRequirement))
+{
+ VK_CHECK(vk.bindImageMemory(device, *m_image, m_allocation->getMemory(), m_allocation->getOffset()));
+}
+
+tcu::UVec3 getShaderGridSize(const ImageType imageType, const tcu::UVec3& imageSize, const deUint32 mipLevel)
+{
+ const deUint32 mipLevelX = std::max(imageSize.x() >> mipLevel, 1u);
+ const deUint32 mipLevelY = std::max(imageSize.y() >> mipLevel, 1u);
+ const deUint32 mipLevelZ = std::max(imageSize.z() >> mipLevel, 1u);
+
+ switch (imageType)
+ {
+ case IMAGE_TYPE_1D:
+ return tcu::UVec3(mipLevelX, 1u, 1u);
+
+ case IMAGE_TYPE_BUFFER:
+ return tcu::UVec3(imageSize.x(), 1u, 1u);
+
+ case IMAGE_TYPE_1D_ARRAY:
+ return tcu::UVec3(mipLevelX, imageSize.z(), 1u);
+
+ case IMAGE_TYPE_2D:
+ return tcu::UVec3(mipLevelX, mipLevelY, 1u);
+
+ case IMAGE_TYPE_2D_ARRAY:
+ return tcu::UVec3(mipLevelX, mipLevelY, imageSize.z());
+
+ case IMAGE_TYPE_3D:
+ return tcu::UVec3(mipLevelX, mipLevelY, mipLevelZ);
+
+ case IMAGE_TYPE_CUBE:
+ return tcu::UVec3(mipLevelX, mipLevelY, 6u);
+
+ case IMAGE_TYPE_CUBE_ARRAY:
+ return tcu::UVec3(mipLevelX, mipLevelY, 6u * imageSize.z());
+
+ default:
+ DE_FATAL("Unknown image type");
+ return tcu::UVec3(1u, 1u, 1u);
+ }
+}
+
+tcu::UVec3 getLayerSize(const ImageType imageType, const tcu::UVec3& imageSize)
+{
+ switch (imageType)
+ {
+ case IMAGE_TYPE_1D:
+ case IMAGE_TYPE_1D_ARRAY:
+ case IMAGE_TYPE_BUFFER:
+ return tcu::UVec3(imageSize.x(), 1u, 1u);
+
+ case IMAGE_TYPE_2D:
+ case IMAGE_TYPE_2D_ARRAY:
+ case IMAGE_TYPE_CUBE:
+ case IMAGE_TYPE_CUBE_ARRAY:
+ return tcu::UVec3(imageSize.x(), imageSize.y(), 1u);
+
+ case IMAGE_TYPE_3D:
+ return tcu::UVec3(imageSize.x(), imageSize.y(), imageSize.z());
+
+ default:
+ DE_FATAL("Unknown image type");
+ return tcu::UVec3(1u, 1u, 1u);
+ }
+}
+
+deUint32 getNumLayers(const ImageType imageType, const tcu::UVec3& imageSize)
+{
+ switch (imageType)
+ {
+ case IMAGE_TYPE_1D:
+ case IMAGE_TYPE_2D:
+ case IMAGE_TYPE_3D:
+ case IMAGE_TYPE_BUFFER:
+ return 1u;
+
+ case IMAGE_TYPE_1D_ARRAY:
+ case IMAGE_TYPE_2D_ARRAY:
+ return imageSize.z();
+
+ case IMAGE_TYPE_CUBE:
+ return 6u;
+
+ case IMAGE_TYPE_CUBE_ARRAY:
+ return imageSize.z() * 6u;
+
+ default:
+ DE_FATAL("Unknown image type");
+ return 0u;
+ }
+}
+
+deUint32 getNumPixels(const ImageType imageType, const tcu::UVec3& imageSize)
+{
+ const tcu::UVec3 gridSize = getShaderGridSize(imageType, imageSize);
+
+ return gridSize.x() * gridSize.y() * gridSize.z();
+}
+
+deUint32 getDimensions(const ImageType imageType)
+{
+ switch (imageType)
+ {
+ case IMAGE_TYPE_1D:
+ case IMAGE_TYPE_BUFFER:
+ return 1u;
+
+ case IMAGE_TYPE_1D_ARRAY:
+ case IMAGE_TYPE_2D:
+ return 2u;
+
+ case IMAGE_TYPE_2D_ARRAY:
+ case IMAGE_TYPE_CUBE:
+ case IMAGE_TYPE_CUBE_ARRAY:
+ case IMAGE_TYPE_3D:
+ return 3u;
+
+ default:
+ DE_FATAL("Unknown image type");
+ return 0u;
+ }
+}
+
+deUint32 getLayerDimensions(const ImageType imageType)
+{
+ switch (imageType)
+ {
+ case IMAGE_TYPE_1D:
+ case IMAGE_TYPE_BUFFER:
+ case IMAGE_TYPE_1D_ARRAY:
+ return 1u;
+
+ case IMAGE_TYPE_2D:
+ case IMAGE_TYPE_2D_ARRAY:
+ case IMAGE_TYPE_CUBE:
+ case IMAGE_TYPE_CUBE_ARRAY:
+ return 2u;
+
+ case IMAGE_TYPE_3D:
+ return 3u;
+
+ default:
+ DE_FATAL("Unknown image type");
+ return 0u;
+ }
+}
+
+bool isImageSizeSupported(const ImageType imageType, const tcu::UVec3& imageSize, const vk::VkPhysicalDeviceLimits& limits)
+{
+ switch (imageType)
+ {
+ case IMAGE_TYPE_1D:
+ return imageSize.x() <= limits.maxImageDimension1D;
+ case IMAGE_TYPE_1D_ARRAY:
+ return imageSize.x() <= limits.maxImageDimension1D &&
+ imageSize.z() <= limits.maxImageArrayLayers;
+ case IMAGE_TYPE_2D:
+ return imageSize.x() <= limits.maxImageDimension2D &&
+ imageSize.y() <= limits.maxImageDimension2D;
+ case IMAGE_TYPE_2D_ARRAY:
+ return imageSize.x() <= limits.maxImageDimension2D &&
+ imageSize.y() <= limits.maxImageDimension2D &&
+ imageSize.z() <= limits.maxImageArrayLayers;
+ case IMAGE_TYPE_CUBE:
+ return imageSize.x() <= limits.maxImageDimensionCube &&
+ imageSize.y() <= limits.maxImageDimensionCube;
+ case IMAGE_TYPE_CUBE_ARRAY:
+ return imageSize.x() <= limits.maxImageDimensionCube &&
+ imageSize.y() <= limits.maxImageDimensionCube &&
+ imageSize.z() <= limits.maxImageArrayLayers;
+ case IMAGE_TYPE_3D:
+ return imageSize.x() <= limits.maxImageDimension3D &&
+ imageSize.y() <= limits.maxImageDimension3D &&
+ imageSize.z() <= limits.maxImageDimension3D;
+ case IMAGE_TYPE_BUFFER:
+ return true;
+ default:
+ DE_FATAL("Unknown image type");
+ return false;
+ }
+}
+
+VkBufferCreateInfo makeBufferCreateInfo (const VkDeviceSize bufferSize,
+ const VkBufferUsageFlags usage)
+{
+ const VkBufferCreateInfo bufferCreateInfo =
+ {
+ VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ 0u, // VkBufferCreateFlags flags;
+ bufferSize, // VkDeviceSize size;
+ usage, // VkBufferUsageFlags usage;
+ VK_SHARING_MODE_EXCLUSIVE, // VkSharingMode sharingMode;
+ 0u, // deUint32 queueFamilyIndexCount;
+ DE_NULL, // const deUint32* pQueueFamilyIndices;
+ };
+ return bufferCreateInfo;
+}
+
+VkBufferImageCopy makeBufferImageCopy (const VkExtent3D extent,
+ const deUint32 layerCount,
+ const deUint32 mipmapLevel,
+ const VkDeviceSize bufferOffset)
+{
+ const VkBufferImageCopy copyParams =
+ {
+ bufferOffset, // VkDeviceSize bufferOffset;
+ 0u, // deUint32 bufferRowLength;
+ 0u, // deUint32 bufferImageHeight;
+ makeImageSubresourceLayers(VK_IMAGE_ASPECT_COLOR_BIT, mipmapLevel, 0u, layerCount), // VkImageSubresourceLayers imageSubresource;
+ makeOffset3D(0, 0, 0), // VkOffset3D imageOffset;
+ extent, // VkExtent3D imageExtent;
+ };
+ return copyParams;
+}
+
+Move<VkCommandPool> makeCommandPool (const DeviceInterface& vk, const VkDevice device, const deUint32 queueFamilyIndex)
+{
+ const VkCommandPoolCreateInfo commandPoolParams =
+ {
+ VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT, // VkCommandPoolCreateFlags flags;
+ queueFamilyIndex, // deUint32 queueFamilyIndex;
+ };
+ return createCommandPool(vk, device, &commandPoolParams);
+}
+
+Move<VkCommandBuffer> makeCommandBuffer (const DeviceInterface& vk, const VkDevice device, const VkCommandPool commandPool)
+{
+ const VkCommandBufferAllocateInfo bufferAllocateParams =
+ {
+ VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ commandPool, // VkCommandPool commandPool;
+ VK_COMMAND_BUFFER_LEVEL_PRIMARY, // VkCommandBufferLevel level;
+ 1u, // deUint32 commandBufferCount;
+ };
+ return allocateCommandBuffer(vk, device, &bufferAllocateParams);
+}
+
+Move<VkPipelineLayout> makePipelineLayout (const DeviceInterface& vk,
+ const VkDevice device,
+ const VkDescriptorSetLayout descriptorSetLayout)
+{
+ const VkPipelineLayoutCreateInfo pipelineLayoutParams =
+ {
+ VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ 0u, // VkPipelineLayoutCreateFlags flags;
+ 1u, // deUint32 setLayoutCount;
+ &descriptorSetLayout, // const VkDescriptorSetLayout* pSetLayouts;
+ 0u, // deUint32 pushConstantRangeCount;
+ DE_NULL, // const VkPushConstantRange* pPushConstantRanges;
+ };
+ return createPipelineLayout(vk, device, &pipelineLayoutParams);
+}
+
+Move<VkPipeline> makeComputePipeline (const DeviceInterface& vk,
+ const VkDevice device,
+ const VkPipelineLayout pipelineLayout,
+ const VkShaderModule shaderModule)
+{
+ const VkPipelineShaderStageCreateInfo pipelineShaderStageParams =
+ {
+ VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ 0u, // VkPipelineShaderStageCreateFlags flags;
+ VK_SHADER_STAGE_COMPUTE_BIT, // VkShaderStageFlagBits stage;
+ shaderModule, // VkShaderModule module;
+ "main", // const char* pName;
+ DE_NULL, // const VkSpecializationInfo* pSpecializationInfo;
+ };
+ const VkComputePipelineCreateInfo pipelineCreateInfo =
+ {
+ VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ 0u, // VkPipelineCreateFlags flags;
+ pipelineShaderStageParams, // VkPipelineShaderStageCreateInfo stage;
+ pipelineLayout, // VkPipelineLayout layout;
+ DE_NULL, // VkPipeline basePipelineHandle;
+ 0, // deInt32 basePipelineIndex;
+ };
+ return createComputePipeline(vk, device, DE_NULL, &pipelineCreateInfo);
+}
+
+Move<VkBufferView> makeBufferView (const DeviceInterface& vk,
+ const VkDevice vkDevice,
+ const VkBuffer buffer,
+ const VkFormat format,
+ const VkDeviceSize offset,
+ const VkDeviceSize size)
+{
+ const VkBufferViewCreateInfo bufferViewParams =
+ {
+ VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ 0u, // VkBufferViewCreateFlags flags;
+ buffer, // VkBuffer buffer;
+ format, // VkFormat format;
+ offset, // VkDeviceSize offset;
+ size, // VkDeviceSize range;
+ };
+ return createBufferView(vk, vkDevice, &bufferViewParams);
+}
+
+Move<VkImageView> makeImageView (const DeviceInterface& vk,
+ const VkDevice vkDevice,
+ const VkImage image,
+ const VkImageViewType imageViewType,
+ const VkFormat format,
+ const VkImageSubresourceRange subresourceRange)
+{
+ const VkImageViewCreateInfo imageViewParams =
+ {
+ VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ 0u, // VkImageViewCreateFlags flags;
+ image, // VkImage image;
+ imageViewType, // VkImageViewType viewType;
+ format, // VkFormat format;
+ makeComponentMappingRGBA(), // VkComponentMapping components;
+ subresourceRange, // VkImageSubresourceRange subresourceRange;
+ };
+ return createImageView(vk, vkDevice, &imageViewParams);
+}
+
+Move<VkDescriptorSet> makeDescriptorSet (const DeviceInterface& vk,
+ const VkDevice device,
+ const VkDescriptorPool descriptorPool,
+ const VkDescriptorSetLayout setLayout)
+{
+ const VkDescriptorSetAllocateInfo allocateParams =
+ {
+ VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ descriptorPool, // VkDescriptorPool descriptorPool;
+ 1u, // deUint32 descriptorSetCount;
+ &setLayout, // const VkDescriptorSetLayout* pSetLayouts;
+ };
+ return allocateDescriptorSet(vk, device, &allocateParams);
+}
+
+Move<VkSemaphore> makeSemaphore (const DeviceInterface& vk, const VkDevice device)
+{
+ const VkSemaphoreCreateInfo semaphoreCreateInfo =
+ {
+ VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
+ DE_NULL,
+ 0u
+ };
+
+ return createSemaphore(vk, device, &semaphoreCreateInfo);
+}
+
+VkBufferMemoryBarrier makeBufferMemoryBarrier (const VkAccessFlags srcAccessMask,
+ const VkAccessFlags dstAccessMask,
+ const VkBuffer buffer,
+ const VkDeviceSize offset,
+ const VkDeviceSize bufferSizeBytes)
+{
+ const VkBufferMemoryBarrier barrier =
+ {
+ VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ srcAccessMask, // VkAccessFlags srcAccessMask;
+ dstAccessMask, // VkAccessFlags dstAccessMask;
+ VK_QUEUE_FAMILY_IGNORED, // deUint32 srcQueueFamilyIndex;
+ VK_QUEUE_FAMILY_IGNORED, // deUint32 dstQueueFamilyIndex;
+ buffer, // VkBuffer buffer;
+ offset, // VkDeviceSize offset;
+ bufferSizeBytes, // VkDeviceSize size;
+ };
+ return barrier;
+}
+
+VkImageMemoryBarrier makeImageMemoryBarrier (const VkAccessFlags srcAccessMask,
+ const VkAccessFlags dstAccessMask,
+ const VkImageLayout oldLayout,
+ const VkImageLayout newLayout,
+ const VkImage image,
+ const VkImageSubresourceRange subresourceRange)
+{
+ const VkImageMemoryBarrier barrier =
+ {
+ VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ srcAccessMask, // VkAccessFlags srcAccessMask;
+ dstAccessMask, // VkAccessFlags dstAccessMask;
+ oldLayout, // VkImageLayout oldLayout;
+ newLayout, // VkImageLayout newLayout;
+ VK_QUEUE_FAMILY_IGNORED, // deUint32 srcQueueFamilyIndex;
+ VK_QUEUE_FAMILY_IGNORED, // deUint32 dstQueueFamilyIndex;
+ image, // VkImage image;
+ subresourceRange, // VkImageSubresourceRange subresourceRange;
+ };
+ return barrier;
+}
+
+vk::VkMemoryBarrier makeMemoryBarrier (const vk::VkAccessFlags srcAccessMask,
+ const vk::VkAccessFlags dstAccessMask)
+{
+ const VkMemoryBarrier barrier =
+ {
+ VK_STRUCTURE_TYPE_MEMORY_BARRIER, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ srcAccessMask, // VkAccessFlags srcAccessMask;
+ dstAccessMask, // VkAccessFlags dstAccessMask;
+ };
+ return barrier;
+}
+
+void beginCommandBuffer (const DeviceInterface& vk, const VkCommandBuffer commandBuffer)
+{
+ const VkCommandBufferBeginInfo commandBufBeginParams =
+ {
+ VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ 0u, // VkCommandBufferUsageFlags flags;
+ (const VkCommandBufferInheritanceInfo*)DE_NULL, // const VkCommandBufferInheritanceInfo* pInheritanceInfo;
+ };
+ VK_CHECK(vk.beginCommandBuffer(commandBuffer, &commandBufBeginParams));
+}
+
+void endCommandBuffer (const DeviceInterface& vk, const VkCommandBuffer commandBuffer)
+{
+ VK_CHECK(vk.endCommandBuffer(commandBuffer));
+}
+
+void submitCommands (const DeviceInterface& vk,
+ const VkQueue queue,
+ const VkCommandBuffer commandBuffer,
+ const deUint32 waitSemaphoreCount,
+ const VkSemaphore* pWaitSemaphores,
+ const VkPipelineStageFlags* pWaitDstStageMask,
+ const deUint32 signalSemaphoreCount,
+ const VkSemaphore* pSignalSemaphores)
+{
+ const VkSubmitInfo submitInfo =
+ {
+ VK_STRUCTURE_TYPE_SUBMIT_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ waitSemaphoreCount, // deUint32 waitSemaphoreCount;
+ pWaitSemaphores, // const VkSemaphore* pWaitSemaphores;
+ pWaitDstStageMask, // const VkPipelineStageFlags* pWaitDstStageMask;
+ 1u, // deUint32 commandBufferCount;
+ &commandBuffer, // const VkCommandBuffer* pCommandBuffers;
+ signalSemaphoreCount, // deUint32 signalSemaphoreCount;
+ pSignalSemaphores, // const VkSemaphore* pSignalSemaphores;
+ };
+
+ VK_CHECK(vk.queueSubmit(queue, 1u, &submitInfo, DE_NULL));
+}
+
+void submitCommandsAndWait (const DeviceInterface& vk,
+ const VkDevice device,
+ const VkQueue queue,
+ const VkCommandBuffer commandBuffer,
+ const deUint32 waitSemaphoreCount,
+ const VkSemaphore* pWaitSemaphores,
+ const VkPipelineStageFlags* pWaitDstStageMask,
+ const deUint32 signalSemaphoreCount,
+ const VkSemaphore* pSignalSemaphores)
+{
+ const VkFenceCreateInfo fenceParams =
+ {
+ VK_STRUCTURE_TYPE_FENCE_CREATE_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ 0u, // VkFenceCreateFlags flags;
+ };
+ const Unique<VkFence> fence(createFence(vk, device, &fenceParams));
+
+ const VkSubmitInfo submitInfo =
+ {
+ VK_STRUCTURE_TYPE_SUBMIT_INFO, // VkStructureType sType;
+ DE_NULL, // const void* pNext;
+ waitSemaphoreCount, // deUint32 waitSemaphoreCount;
+ pWaitSemaphores, // const VkSemaphore* pWaitSemaphores;
+ pWaitDstStageMask, // const VkPipelineStageFlags* pWaitDstStageMask;
+ 1u, // deUint32 commandBufferCount;
+ &commandBuffer, // const VkCommandBuffer* pCommandBuffers;
+ signalSemaphoreCount, // deUint32 signalSemaphoreCount;
+ pSignalSemaphores, // const VkSemaphore* pSignalSemaphores;
+ };
+
+ VK_CHECK(vk.queueSubmit(queue, 1u, &submitInfo, *fence));
+ VK_CHECK(vk.waitForFences(device, 1u, &fence.get(), DE_TRUE, ~0ull));
+}
+
+VkImageType mapImageType (const ImageType imageType)
+{
+ switch (imageType)
+ {
+ case IMAGE_TYPE_1D:
+ case IMAGE_TYPE_1D_ARRAY:
+ case IMAGE_TYPE_BUFFER:
+ return VK_IMAGE_TYPE_1D;
+
+ case IMAGE_TYPE_2D:
+ case IMAGE_TYPE_2D_ARRAY:
+ case IMAGE_TYPE_CUBE:
+ case IMAGE_TYPE_CUBE_ARRAY:
+ return VK_IMAGE_TYPE_2D;
+
+ case IMAGE_TYPE_3D:
+ return VK_IMAGE_TYPE_3D;
+
+ default:
+ DE_ASSERT(false);
+ return VK_IMAGE_TYPE_LAST;
+ }
+}
+
+VkImageViewType mapImageViewType (const ImageType imageType)
+{
+ switch (imageType)
+ {
+ case IMAGE_TYPE_1D: return VK_IMAGE_VIEW_TYPE_1D;
+ case IMAGE_TYPE_1D_ARRAY: return VK_IMAGE_VIEW_TYPE_1D_ARRAY;
+ case IMAGE_TYPE_2D: return VK_IMAGE_VIEW_TYPE_2D;
+ case IMAGE_TYPE_2D_ARRAY: return VK_IMAGE_VIEW_TYPE_2D_ARRAY;
+ case IMAGE_TYPE_3D: return VK_IMAGE_VIEW_TYPE_3D;
+ case IMAGE_TYPE_CUBE: return VK_IMAGE_VIEW_TYPE_CUBE;
+ case IMAGE_TYPE_CUBE_ARRAY: return VK_IMAGE_VIEW_TYPE_CUBE_ARRAY;
+
+ default:
+ DE_ASSERT(false);
+ return VK_IMAGE_VIEW_TYPE_LAST;
+ }
+}
+
+std::string getImageTypeName (const ImageType imageType)
+{
+ switch (imageType)
+ {
+ case IMAGE_TYPE_1D: return "1d";
+ case IMAGE_TYPE_1D_ARRAY: return "1d_array";
+ case IMAGE_TYPE_2D: return "2d";
+ case IMAGE_TYPE_2D_ARRAY: return "2d_array";
+ case IMAGE_TYPE_3D: return "3d";
+ case IMAGE_TYPE_CUBE: return "cube";
+ case IMAGE_TYPE_CUBE_ARRAY: return "cube_array";
+ case IMAGE_TYPE_BUFFER: return "buffer";
+
+ default:
+ DE_ASSERT(false);
+ return "";
+ }
+}
+
+std::string getShaderImageType (const tcu::TextureFormat& format, const ImageType imageType)
+{
+ std::string formatPart = tcu::getTextureChannelClass(format.type) == tcu::TEXTURECHANNELCLASS_UNSIGNED_INTEGER ? "u" :
+ tcu::getTextureChannelClass(format.type) == tcu::TEXTURECHANNELCLASS_SIGNED_INTEGER ? "i" : "";
+
+ std::string imageTypePart;
+ switch (imageType)
+ {
+ case IMAGE_TYPE_1D: imageTypePart = "1D"; break;
+ case IMAGE_TYPE_1D_ARRAY: imageTypePart = "1DArray"; break;
+ case IMAGE_TYPE_2D: imageTypePart = "2D"; break;
+ case IMAGE_TYPE_2D_ARRAY: imageTypePart = "2DArray"; break;
+ case IMAGE_TYPE_3D: imageTypePart = "3D"; break;
+ case IMAGE_TYPE_CUBE: imageTypePart = "Cube"; break;
+ case IMAGE_TYPE_CUBE_ARRAY: imageTypePart = "CubeArray"; break;
+ case IMAGE_TYPE_BUFFER: imageTypePart = "Buffer"; break;
+
+ default:
+ DE_ASSERT(false);
+ }
+
+ return formatPart + "image" + imageTypePart;
+}
+
+std::string getShaderImageFormatQualifier (const tcu::TextureFormat& format)
+{
+ const char* orderPart;
+ const char* typePart;
+
+ switch (format.order)
+ {
+ case tcu::TextureFormat::R: orderPart = "r"; break;
+ case tcu::TextureFormat::RG: orderPart = "rg"; break;
+ case tcu::TextureFormat::RGB: orderPart = "rgb"; break;
+ case tcu::TextureFormat::RGBA: orderPart = "rgba"; break;
+
+ default:
+ DE_ASSERT(false);
+ orderPart = DE_NULL;
+ }
+
+ switch (format.type)
+ {
+ case tcu::TextureFormat::FLOAT: typePart = "32f"; break;
+ case tcu::TextureFormat::HALF_FLOAT: typePart = "16f"; break;
+
+ case tcu::TextureFormat::UNSIGNED_INT32: typePart = "32ui"; break;
+ case tcu::TextureFormat::UNSIGNED_INT16: typePart = "16ui"; break;
+ case tcu::TextureFormat::UNSIGNED_INT8: typePart = "8ui"; break;
+
+ case tcu::TextureFormat::SIGNED_INT32: typePart = "32i"; break;
+ case tcu::TextureFormat::SIGNED_INT16: typePart = "16i"; break;
+ case tcu::TextureFormat::SIGNED_INT8: typePart = "8i"; break;
+
+ case tcu::TextureFormat::UNORM_INT16: typePart = "16"; break;
+ case tcu::TextureFormat::UNORM_INT8: typePart = "8"; break;
+
+ case tcu::TextureFormat::SNORM_INT16: typePart = "16_snorm"; break;
+ case tcu::TextureFormat::SNORM_INT8: typePart = "8_snorm"; break;
+
+ default:
+ DE_ASSERT(false);
+ typePart = DE_NULL;
+ }
+
+ return std::string() + orderPart + typePart;
+}
+
+VkExtent3D mipLevelExtents (const VkExtent3D& baseExtents, const deUint32 mipLevel)
+{
+ VkExtent3D result;
+
+ result.width = std::max(baseExtents.width >> mipLevel, 1u);
+ result.height = std::max(baseExtents.height >> mipLevel, 1u);
+ result.depth = std::max(baseExtents.depth >> mipLevel, 1u);
+
+ return result;
+}
+
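+// Number of mip levels in a full mip chain for the given image extent, clamped to the maximum reported in imageFormatProperties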
+deUint32 getImageMaxMipLevels (const VkImageFormatProperties& imageFormatProperties, const VkImageCreateInfo& imageInfo)
+{
+ const deUint32 widestEdge = std::max(std::max(imageInfo.extent.width, imageInfo.extent.height), imageInfo.extent.depth);
+
+ return std::min(static_cast<deUint32>(deFloatLog2(static_cast<float>(widestEdge))) + 1u, imageFormatProperties.maxMipLevels);
+}
+
+deUint32 getImageMipLevelSizeInBytes (const VkExtent3D& baseExtents, const deUint32 layersCount, const tcu::TextureFormat& format, const deUint32 mipmapLevel)
+{
+ const VkExtent3D extents = mipLevelExtents(baseExtents, mipmapLevel);
+
+ return extents.width * extents.height * extents.depth * layersCount * tcu::getPixelSize(format);
+}
+
+deUint32 getImageSizeInBytes (const VkExtent3D& baseExtents, const deUint32 layersCount, const tcu::TextureFormat& format, const deUint32 mipmapLevelsCount)
+{
+ deUint32 imageSizeInBytes = 0;
+ for (deUint32 mipmapLevel = 0; mipmapLevel < mipmapLevelsCount; ++mipmapLevel)
+ {
+ imageSizeInBytes += getImageMipLevelSizeInBytes(baseExtents, layersCount, format, mipmapLevel);
+ }
+
+ return imageSizeInBytes;
+}
+
+} // sparse
+} // vkt
--- /dev/null
+#ifndef _VKTSPARSERESOURCESTESTSUTIL_HPP
+#define _VKTSPARSERESOURCESTESTSUTIL_HPP
+/*------------------------------------------------------------------------
+ * Vulkan Conformance Tests
+ * ------------------------
+ *
+ * Copyright (c) 2016 The Khronos Group Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and/or associated documentation files (the
+ * "Materials"), to deal in the Materials without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Materials, and to
+ * permit persons to whom the Materials are furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice(s) and this permission notice shall be included
+ * in all copies or substantial portions of the Materials.
+ *
+ * THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
+ *
+ *//*!
+ * \file vktSparseResourcesTestsUtil.hpp
+ * \brief Sparse Resources Tests Utility Classes
+ *//*--------------------------------------------------------------------*/
+
+#include "vkDefs.hpp"
+#include "vkMemUtil.hpp"
+#include "vkRef.hpp"
+#include "vkRefUtil.hpp"
+#include "vkPrograms.hpp"
+#include "vkTypeUtil.hpp"
+#include "vkImageUtil.hpp"
+#include "deSharedPtr.hpp"
+
+namespace vkt
+{
+namespace sparse
+{
+
+enum ImageType
+{
+ IMAGE_TYPE_1D = 0,
+ IMAGE_TYPE_1D_ARRAY,
+ IMAGE_TYPE_2D,
+ IMAGE_TYPE_2D_ARRAY,
+ IMAGE_TYPE_3D,
+ IMAGE_TYPE_CUBE,
+ IMAGE_TYPE_CUBE_ARRAY,
+ IMAGE_TYPE_BUFFER,
+
+ IMAGE_TYPE_LAST
+};
+
+vk::VkImageType mapImageType (const ImageType imageType);
+vk::VkImageViewType mapImageViewType (const ImageType imageType);
+std::string getImageTypeName (const ImageType imageType); //!< Name of the image type as used in test case names (e.g. "2d_array")
+std::string getShaderImageType (const tcu::TextureFormat& format, const ImageType imageType); //!< GLSL image type for the given format and image type (e.g. "uimage2DArray")
+std::string getShaderImageFormatQualifier (const tcu::TextureFormat& format); //!< GLSL format layout qualifier for the given format (e.g. "rgba32ui")
+
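+//! Buffer that owns its backing memory: the constructor creates the buffer, allocates device memory for it and binds the two together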
+class Buffer
+{
+public:
+ Buffer (const vk::DeviceInterface& vk,
+ const vk::VkDevice device,
+ vk::Allocator& allocator,
+ const vk::VkBufferCreateInfo& bufferCreateInfo,
+ const vk::MemoryRequirement memoryRequirement);
+
+ const vk::VkBuffer& get (void) const { return *m_buffer; }
+ const vk::VkBuffer& operator* (void) const { return get(); }
+ vk::Allocation& getAllocation (void) const { return *m_allocation; }
+
+private:
+ vk::Unique<vk::VkBuffer> m_buffer;
+ de::UniquePtr<vk::Allocation> m_allocation;
+
+ Buffer (const Buffer&);
+ Buffer& operator= (const Buffer&);
+};
+
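+//! Image that owns its backing memory: the constructor creates the image, allocates device memory for it and binds the two together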
+class Image
+{
+public:
+ Image (const vk::DeviceInterface& vk,
+ const vk::VkDevice device,
+ vk::Allocator& allocator,
+ const vk::VkImageCreateInfo& imageCreateInfo,
+ const vk::MemoryRequirement memoryRequirement);
+
+ const vk::VkImage& get (void) const { return *m_image; }
+ const vk::VkImage& operator* (void) const { return get(); }
+ vk::Allocation& getAllocation (void) const { return *m_allocation; }
+
+private:
+ vk::Unique<vk::VkImage> m_image;
+ de::UniquePtr<vk::Allocation> m_allocation;
+
+ Image (const Image&);
+ Image& operator= (const Image&);
+};
+
+tcu::UVec3 getShaderGridSize (const ImageType imageType,
+ const tcu::UVec3& imageSize,
+ const deUint32 mipLevel = 0); //!< Size used for addressing an image in a shader
+tcu::UVec3 getLayerSize (const ImageType imageType, const tcu::UVec3& imageSize); //!< Size of a single layer
+deUint32 getNumLayers (const ImageType imageType, const tcu::UVec3& imageSize); //!< Number of array layers (for array and cube types)
+deUint32 getNumPixels (const ImageType imageType, const tcu::UVec3& imageSize); //!< Number of texels in an image
+deUint32 getDimensions (const ImageType imageType); //!< Coordinate dimension used for addressing (e.g. 3 (x,y,z) for 2d array)
+deUint32 getLayerDimensions (const ImageType imageType); //!< Coordinate dimension used for addressing a single layer (e.g. 2 (x,y) for 2d array)
+bool isImageSizeSupported (const ImageType imageType,
+ const tcu::UVec3& imageSize,
+ const vk::VkPhysicalDeviceLimits& limits); //!< Check whether the requested image size is within the device limits
+
+vk::Move<vk::VkCommandPool> makeCommandPool (const vk::DeviceInterface& vk,
+ const vk::VkDevice device,
+ const deUint32 queueFamilyIndex);
+
+vk::Move<vk::VkCommandBuffer> makeCommandBuffer (const vk::DeviceInterface& vk,
+ const vk::VkDevice device,
+ const vk::VkCommandPool commandPool);
+
+vk::Move<vk::VkPipelineLayout> makePipelineLayout (const vk::DeviceInterface& vk,
+ const vk::VkDevice device,
+ const vk::VkDescriptorSetLayout descriptorSetLayout);
+
+vk::Move<vk::VkPipeline> makeComputePipeline (const vk::DeviceInterface& vk,
+ const vk::VkDevice device,
+ const vk::VkPipelineLayout pipelineLayout,
+ const vk::VkShaderModule shaderModule);
+
+vk::Move<vk::VkBufferView> makeBufferView (const vk::DeviceInterface& vk,
+ const vk::VkDevice device,
+ const vk::VkBuffer buffer,
+ const vk::VkFormat format,
+ const vk::VkDeviceSize offset,
+ const vk::VkDeviceSize size);
+
+vk::Move<vk::VkImageView> makeImageView (const vk::DeviceInterface& vk,
+ const vk::VkDevice device,
+ const vk::VkImage image,
+ const vk::VkImageViewType imageViewType,
+ const vk::VkFormat format,
+ const vk::VkImageSubresourceRange subresourceRange);
+
+vk::Move<vk::VkDescriptorSet> makeDescriptorSet (const vk::DeviceInterface& vk,
+ const vk::VkDevice device,
+ const vk::VkDescriptorPool descriptorPool,
+ const vk::VkDescriptorSetLayout setLayout);
+
+vk::Move<vk::VkSemaphore> makeSemaphore (const vk::DeviceInterface& vk,
+ const vk::VkDevice device);
+
+vk::VkBufferCreateInfo makeBufferCreateInfo (const vk::VkDeviceSize bufferSize,
+ const vk::VkBufferUsageFlags usage);
+
+vk::VkBufferImageCopy makeBufferImageCopy (const vk::VkExtent3D extent,
+ const deUint32 layersCount,
+ const deUint32 mipmapLevel = 0u,
+ const vk::VkDeviceSize bufferOffset = 0ull);
+
+vk::VkBufferMemoryBarrier makeBufferMemoryBarrier (const vk::VkAccessFlags srcAccessMask,
+ const vk::VkAccessFlags dstAccessMask,
+ const vk::VkBuffer buffer,
+ const vk::VkDeviceSize offset,
+ const vk::VkDeviceSize bufferSizeBytes);
+
+vk::VkImageMemoryBarrier makeImageMemoryBarrier (const vk::VkAccessFlags srcAccessMask,
+ const vk::VkAccessFlags dstAccessMask,
+ const vk::VkImageLayout oldLayout,
+ const vk::VkImageLayout newLayout,
+ const vk::VkImage image,
+ const vk::VkImageSubresourceRange subresourceRange);
+
+vk::VkMemoryBarrier makeMemoryBarrier (const vk::VkAccessFlags srcAccessMask,
+ const vk::VkAccessFlags dstAccessMask);
+
+void beginCommandBuffer (const vk::DeviceInterface& vk,
+ const vk::VkCommandBuffer cmdBuffer);
+
+void endCommandBuffer (const vk::DeviceInterface& vk,
+ const vk::VkCommandBuffer cmdBuffer);
+
+void submitCommands (const vk::DeviceInterface& vk,
+ const vk::VkQueue queue,
+ const vk::VkCommandBuffer cmdBuffer,
+ const deUint32 waitSemaphoreCount = 0,
+ const vk::VkSemaphore* pWaitSemaphores = DE_NULL,
+ const vk::VkPipelineStageFlags* pWaitDstStageMask = DE_NULL,
+ const deUint32 signalSemaphoreCount = 0,
+ const vk::VkSemaphore* pSignalSemaphores = DE_NULL);
+
+void submitCommandsAndWait (const vk::DeviceInterface& vk,
+ const vk::VkDevice device,
+ const vk::VkQueue queue,
+ const vk::VkCommandBuffer cmdBuffer,
+ const deUint32 waitSemaphoreCount = 0,
+ const vk::VkSemaphore* pWaitSemaphores = DE_NULL,
+ const vk::VkPipelineStageFlags* pWaitDstStageMask = DE_NULL,
+ const deUint32 signalSemaphoreCount = 0,
+ const vk::VkSemaphore* pSignalSemaphores = DE_NULL);
+
+vk::VkExtent3D mipLevelExtents (const vk::VkExtent3D& baseExtents,
+ const deUint32 mipLevel);
+
+tcu::UVec3 mipLevelExtents (const tcu::UVec3& baseExtents,
+ const deUint32 mipLevel);
+
+deUint32 getImageMaxMipLevels (const vk::VkImageFormatProperties& imageFormatProperties,
+ const vk::VkImageCreateInfo& imageInfo);
+
+deUint32 getImageMipLevelSizeInBytes (const vk::VkExtent3D& baseExtents,
+ const deUint32 layersCount,
+ const tcu::TextureFormat& format,
+ const deUint32 mipmapLevel);
+
+deUint32 getImageSizeInBytes (const vk::VkExtent3D& baseExtents,
+ const deUint32 layersCount,
+ const tcu::TextureFormat& format,
+ const deUint32 mipmapLevelsCount = 1u);
+
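+//! Move a Vulkan object into a SharedPtr so that its ownership can be shared between several holders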
+template<typename T>
+inline de::SharedPtr<vk::Unique<T> > makeVkSharedPtr (vk::Move<T> vkMove)
+{
+ return de::SharedPtr<vk::Unique<T> >(new vk::Unique<T>(vkMove));
+}
+
+} // sparse
+} // vkt
+
+#endif // _VKTSPARSERESOURCESTESTSUTIL_HPP
#include "vktComputeTests.hpp"
#include "vktImageTests.hpp"
#include "vktInfoTests.hpp"
+#include "vktSparseResourcesTests.hpp"
#include <vector>
#include <sstream>
addChild(Draw::createTests (m_testCtx));
addChild(compute::createTests (m_testCtx));
addChild(image::createTests (m_testCtx));
+ addChild(sparse::createTests (m_testCtx));
}
} // vkt