*.so.*
icd/common/libicd.a
icd/intel/intel_gpa.c
-loader/dispatch.c
-loader/table_ops.h
-tests/xgl_image_tests
-tests/xgl_render_tests
-tests/xglbase
-tests/xglinfo
-layers/xgl_dispatch_table_helper.h
-layers/xgl_enum_string_helper.h
-layers/xgl_generic_intercept_proc_helper.h
-layers/xgl_struct_string_helper.h
-layers/xgl_struct_wrappers.cpp
-layers/xgl_struct_wrappers.h
_out64
out32/*
out64/*
*.vcxproj
*.sdf
*.filters
+build
+dbuild
Example debug build:
```
-cd YOUR_DEV_DIRECTORY # cd to the root of the xgl git repository
+cd YOUR_DEV_DIRECTORY # cd to the root of the vk git repository
export KHRONOS_ACCOUNT_NAME=<subversion login name for svn checkout of BIL>
./update_external_sources.sh # fetches and builds glslang, llvm, LunarGLASS, and BIL
cmake -H. -Bdbuild -DCMAKE_BUILD_TYPE=Debug
make
```
-To run XGL programs you must tell the icd loader where to find the libraries. Set the
-environment variable LIBXGL_DRIVERS_PATH to the driver path. For example:
+To run VK programs you must tell the icd loader where to find the libraries. Set the
+environment variable LIBVK_DRIVERS_PATH to the driver path. For example:
```
-export LIBXGL_DRIVERS_PATH=$PWD/icd/intel
+export LIBVK_DRIVERS_PATH=$PWD/icd/intel
```
-To enable debug and validation layers with your XGL programs you must tell the icd loader
-where to find the layer libraries. Set the environment variable LIBXGL_LAYERS_PATH to
-the layer folder and indicate the layers you want loaded via LIBXGL_LAYER_NAMES.
+To enable debug and validation layers with your VK programs you must tell the icd loader
+where to find the layer libraries. Set the environment variable LIBVK_LAYERS_PATH to
+the layer folder and indicate the layers you want loaded via LIBVK_LAYER_NAMES.
For example, to enable the APIDump and DrawState layers, do:
```
-export LIBXGL_LAYERS_PATH=$PWD/layers
-export LIBXGL_LAYER_NAMES=APIDump:DrawState
+export LIBVK_LAYERS_PATH=$PWD/layers
+export LIBVK_LAYER_NAMES=APIDump:DrawState
```
## Linux Test
The test executables can be found in the dbuild/tests directory. The tests use the Google
gtest infrastructure. Tests available so far:
-- xglinfo: Report GPU properties
-- xglbase: Test basic entry points
-- xgl_blit_tests: Test XGL Blits (copy, clear, and resolve)
-- xgl_image_tests: Test XGL image related calls needed by render_test
-- xgl_render_tests: Render a single triangle with XGL. Triangle will be in a .ppm in
+- vkinfo: Report GPU properties
+- vkbase: Test basic entry points
+- vk_blit_tests: Test VK Blits (copy, clear, and resolve)
+- vk_image_tests: Test VK image related calls needed by render_test
+- vk_render_tests: Render a single triangle with VK. Triangle will be in a .ppm in
the current directory at the end of the test.
## Linux Demos
Example debug build:
```
-cd GL-Next # cd to the root of the xgl git repository
+cd GL-Next # cd to the root of the vk git repository
mkdir _out64
cd _out64
cmake -G "Visual Studio 12 Win64" -DCMAKE_BUILD_TYPE=Debug ..
```
-At this point, you can use Windows Explorer to launch Visual Studio by double-clicking on the "XGL.sln" file in the _out64 folder. Once Visual Studio comes up, you can select "Debug" or "Release" from a drop-down list. You can start a build with either the menu (Build->Build Solution), or a keyboard shortcut (Ctrl+Shift+B). As part of the build process, Python scripts will create additional Visual Studio files and projects, along with additional source files. All of these auto-generated files are under the "_out64" folder.
+At this point, you can use Windows Explorer to launch Visual Studio by double-clicking on the "VK.sln" file in the _out64 folder. Once Visual Studio comes up, you can select "Debug" or "Release" from a drop-down list. You can start a build with either the menu (Build->Build Solution), or a keyboard shortcut (Ctrl+Shift+B). As part of the build process, Python scripts will create additional Visual Studio files and projects, along with additional source files. All of these auto-generated files are under the "_out64" folder.
-XGL programs must be able to find and use the XGL.dll libary. Make sure it is either installed in the C:\Windows\System32 folder, or the PATH enviroment variable includes the folder that it is located in.
+VK programs must be able to find and use the VK.dll library. Make sure it is either installed in the C:\Windows\System32 folder, or the PATH environment variable includes the folder that it is located in.
-To run XGL programs you must have an appropriate ICD (installable client driver) that is either installed in the C:\Windows\System32 folder, or pointed to by the registry and/or an environment variable:
+To run VK programs you must have an appropriate ICD (installable client driver) that is either installed in the C:\Windows\System32 folder, or pointed to by the registry and/or an environment variable:
- Registry:
- Root Key: HKEY_LOCAL_MACHINE
- - Key: "SOFTWARE\XGL"
- - Value: "XGL_DRIVERS_PATH" (semi-colon-delimited set of folders to look for ICDs)
-- Environment Variable: "XGL_DRIVERS_PATH" (semi-colon-delimited set of folders to look for ICDs)
+ - Key: "SOFTWARE\VK"
+ - Value: "VK_DRIVERS_PATH" (semi-colon-delimited set of folders to look for ICDs)
+- Environment Variable: "VK_DRIVERS_PATH" (semi-colon-delimited set of folders to look for ICDs)
Note: If both the registry value and environment variable are used, they are concatenated into a new semi-colon-delimited list of folders.
- Within the search box, type "environment variable" and click on "Edit the system environment variables" (or navigate there via "System and Security->System->Advanced system settings").
- This will launch a window with several tabs, one of which is "Advanced". Click on the "Environment Variables..." button.
- For either "User variables" or "System variables" click "New...".
-- Enter "XGL_DRIVERS_PATH" as the variable name, and an appropriate Windows path to where your driver DLL is (e.g. C:\Users\username\GL-Next\_out64\icd\drivername\Debug).
+- Enter "VK_DRIVERS_PATH" as the variable name, and an appropriate Windows path to where your driver DLL is (e.g. C:\Users\username\GL-Next\_out64\icd\drivername\Debug).
It is possible to specify multiple icd folders. Simply use a semi-colon (i.e. ";") to separate folders in the environment variable.
-The icd loader searches in all of the folders for files that are named "XGL_*.dll" (e.g. "XGL_foo.dll"). It attempts to dynamically load these files, and look for appropriate functions.
+The icd loader searches in all of the folders for files that are named "VK_*.dll" (e.g. "VK_foo.dll"). It attempts to dynamically load these files, and look for appropriate functions.
-To enable debug and validation layers with your XGL programs you must tell the icd loader
+To enable debug and validation layers with your VK programs you must tell the icd loader
where to find the layer libraries, and which ones you desire to use. The default folder for layers is C:\Windows\System32. Again, this can be pointed to by the registry and/or an environment variable:
- Registry:
- Root Key: HKEY_LOCAL_MACHINE
- - Key: "System\XGL"
- - Value: "XGL_LAYERS_PATH" (semi-colon-delimited set of folders to look for layers)
- - Value: "XGL_LAYER_NAMES" (semi-colon-delimited list of layer names)
+ - Key: "System\VK"
+ - Value: "VK_LAYERS_PATH" (semi-colon-delimited set of folders to look for layers)
+ - Value: "VK_LAYER_NAMES" (semi-colon-delimited list of layer names)
- Environment Variables:
- - "XGL_LAYERS_PATH" (semi-colon-delimited set of folders to look for layers)
- - "XGL_LAYER_NAMES" (semi-colon-delimited list of layer names)
+ - "VK_LAYERS_PATH" (semi-colon-delimited set of folders to look for layers)
+ - "VK_LAYER_NAMES" (semi-colon-delimited list of layer names)
Note: If both the registry value and environment variable are used, they are concatenated into a new semi-colon-delimited list.
-The icd loader searches in all of the folders for files that are named "XGLLayer*.dll" (e.g. "XGLLayerParamChecker.dll"). It attempts to dynamically load these files, and look for appropriate functions.
+The icd loader searches in all of the folders for files that are named "VKLayer*.dll" (e.g. "VKLayerParamChecker.dll"). It attempts to dynamically load these files, and look for appropriate functions.
-# Explicit GL (XGL) Ecosystem Components\r
+# Explicit GL (VK) Ecosystem Components\r
*Version 0.8, 04 Feb 2015*\r
\r
-This project provides *open source* tools for XGL Developers.\r
+This project provides *open source* tools for VK Developers.\r
\r
## Introduction\r
\r
-XGL is an Explicit API, enabling direct control over how GPUs actually work. No validation, shader recompilation, memory management or synchronization is done inside an XGL driver. Applications have full control and responsibility. Any errors in how XGL is used are likely to result in a crash. This project provides layered utility libraries to ease development and help guide developers to proven safe patterns.\r
+VK is an Explicit API, enabling direct control over how GPUs actually work. No validation, shader recompilation, memory management or synchronization is done inside a VK driver. Applications have full control and responsibility. Any errors in how VK is used are likely to result in a crash. This project provides layered utility libraries to ease development and help guide developers to proven safe patterns.\r
\r
-New with XGL in an extensible layered architecture that enables significant innovation in tools:\r
+New with VK is an extensible layered architecture that enables significant innovation in tools:\r
- Cross IHV support enables tools vendors to plug into a common, extensible layer architecture\r
- Layered tools during development enable validating, debugging and profiling without production performance impact\r
- Modular validation architecture encourages many fine-grained layers--and new layers can be added easily\r
demos for GDC.\r
\r
The following components are available:\r
-- XGL Library and header files, which include:\r
+- VK Library and header files, which include:\r
- [*ICD Loader*](loader) and [*Layer Manager*](layers/README.md)\r
- - Snapshot of *XGL* and *BIL* header files from [*Khronos*](www.khronos.org)\r
+ - Snapshot of *VK* and *BIL* header files from [*Khronos*](https://www.khronos.org)\r
\r
- [*GLAVE Debugger*](tools/glave)\r
\r
\r
## New\r
\r
-- Updated loader, driver, demos, tests and many tools to use "alpha" xgl.h (~ version 47).\r
+- Updated loader, driver, demos, tests and many tools to use "alpha" vulkan.h (~ version 47).\r
Supports new resource binding model, memory allocation, pixel FORMATs and\r
other updates.\r
APIDump layer is working with these new API elements.\r
\r
## Prior updates\r
\r
-- XGL API trace and capture tools. See tools/glave/README.md for details.\r
+- VK API trace and capture tools. See tools/glave/README.md for details.\r
- Sample driver now supports multiple render targets. Added TriangleMRT to test that functionality.\r
-- Added XGL_SLOT_SHADER_TEXTURE_RESOURCE to xgl.h as a descriptor slot type to work around confusion in GLSL\r
+- Added VK_SLOT_SHADER_TEXTURE_RESOURCE to vulkan.h as a descriptor slot type to work around confusion in GLSL\r
between textures and buffers as shader resources.\r
- Misc. fixes for layers and Intel sample driver\r
- Added mutex to APIDump, APIDumpFile and DrawState to prevent apparent threading issues using printf\r
\r
## References\r
This version of the components is written based on the following preliminary specs and proposals:\r
-- [**XGL Programers Reference**, 1 Jul 2014](https://cvs.khronos.org/svn/repos/oglc/trunk/nextgen/proposals/AMD/Explicit%20GL%20Programming%20Guide%20and%20API%20Reference.pdf)\r
+- [**VK Programmers Reference**, 1 Jul 2014](https://cvs.khronos.org/svn/repos/oglc/trunk/nextgen/proposals/AMD/Explicit%20GL%20Programming%20Guide%20and%20API%20Reference.pdf)\r
- [**BIL**, revision 29](https://cvs.khronos.org/svn/repos/oglc/trunk/nextgen/proposals/BIL/Specification/BIL.html)\r
\r
## License\r
This work is intended to be released as open source under a BSD-style\r
-license once the XGL specification is public. Until that time, this work\r
-is covered by the Khronos NDA governing the details of the XGL API.\r
+license once the VK specification is public. Until that time, this work\r
+is covered by the Khronos NDA governing the details of the VK API.\r
\r
## Acknowledgements\r
While this project is being developed by LunarG, Inc., there are many other\r
companies and individuals making this possible: Valve Software, funding\r
project development; Intel Corporation, providing full hardware specifications\r
-and valuable technical feedback; AMD, providing XGL spec editor contributions;\r
+and valuable technical feedback; AMD, providing VK spec editor contributions;\r
ARM, contributing a Chairman for this working group within Khronos; Nvidia,\r
providing an initial co-editor for the spec; Qualcomm for picking up the\r
co-editor's chair; and Khronos, for providing hosting within GitHub.\r
\r
## Contact\r
If you have questions or comments about this driver; or you would like to contribute\r
-directly to this effort, please contact us at XGL@LunarG.com; or if you prefer, via\r
+directly to this effort, please contact us at VK@LunarG.com or, if you prefer, via\r
the GL Common mailing list: gl_common@khronos.org\r
)
file(COPY ${TEXTURES} DESTINATION ${CMAKE_BINARY_DIR}/demos)
-set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES")
-set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DXGL_PROTOTYPES")
+set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES")
+set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DVK_PROTOTYPES")
if(WIN32)
set (LIBRARIES "XGL")
#include <assert.h>
#include <xcb/xcb.h>
-#include <xgl.h>
-#include <xglDbg.h>
-#include <xglWsiX11Ext.h>
+#include <vulkan.h>
+#include <vkDbg.h>
+#include <vkWsiX11Ext.h>
#include "icd-spv.h"
* structure to track all objects related to a texture.
*/
struct texture_object {
- XGL_SAMPLER sampler;
+ VK_SAMPLER sampler;
- XGL_IMAGE image;
- XGL_IMAGE_LAYOUT imageLayout;
+ VK_IMAGE image;
+ VK_IMAGE_LAYOUT imageLayout;
uint32_t num_mem;
- XGL_GPU_MEMORY *mem;
- XGL_IMAGE_VIEW view;
+ VK_GPU_MEMORY *mem;
+ VK_IMAGE_VIEW view;
int32_t tex_width, tex_height;
};
"lunarg-logo-256x256-solid.png"
};
-struct xglcube_vs_uniform {
+struct vkcube_vs_uniform {
// Must start with MVP
float mvp[4][4];
float position[12*3][4];
float color[12*3][4];
};
-struct xgltexcube_vs_uniform {
+struct vktexcube_vs_uniform {
// Must start with MVP
float mvp[4][4];
float position[12*3][4];
xcb_screen_t *screen;
bool use_staging_buffer;
- XGL_INSTANCE inst;
- XGL_PHYSICAL_GPU gpu;
- XGL_DEVICE device;
- XGL_QUEUE queue;
+ VK_INSTANCE inst;
+ VK_PHYSICAL_GPU gpu;
+ VK_DEVICE device;
+ VK_QUEUE queue;
uint32_t graphics_queue_node_index;
- XGL_PHYSICAL_GPU_PROPERTIES *gpu_props;
- XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
+ VK_PHYSICAL_GPU_PROPERTIES *gpu_props;
+ VK_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
- XGL_FRAMEBUFFER framebuffer;
+ VK_FRAMEBUFFER framebuffer;
int width, height;
- XGL_FORMAT format;
+ VK_FORMAT format;
struct {
- XGL_IMAGE image;
- XGL_GPU_MEMORY mem;
- XGL_CMD_BUFFER cmd;
+ VK_IMAGE image;
+ VK_GPU_MEMORY mem;
+ VK_CMD_BUFFER cmd;
- XGL_COLOR_ATTACHMENT_VIEW view;
- XGL_FENCE fence;
+ VK_COLOR_ATTACHMENT_VIEW view;
+ VK_FENCE fence;
} buffers[DEMO_BUFFER_COUNT];
struct {
- XGL_FORMAT format;
+ VK_FORMAT format;
- XGL_IMAGE image;
+ VK_IMAGE image;
uint32_t num_mem;
- XGL_GPU_MEMORY *mem;
- XGL_DEPTH_STENCIL_VIEW view;
+ VK_GPU_MEMORY *mem;
+ VK_DEPTH_STENCIL_VIEW view;
} depth;
struct texture_object textures[DEMO_TEXTURE_COUNT];
struct {
- XGL_BUFFER buf;
+ VK_BUFFER buf;
uint32_t num_mem;
- XGL_GPU_MEMORY *mem;
- XGL_BUFFER_VIEW view;
- XGL_BUFFER_VIEW_ATTACH_INFO attach;
+ VK_GPU_MEMORY *mem;
+ VK_BUFFER_VIEW view;
+ VK_BUFFER_VIEW_ATTACH_INFO attach;
} uniform_data;
- XGL_CMD_BUFFER cmd; // Buffer for initialization commands
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN desc_layout_chain;
- XGL_DESCRIPTOR_SET_LAYOUT desc_layout;
- XGL_PIPELINE pipeline;
+ VK_CMD_BUFFER cmd; // Buffer for initialization commands
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN desc_layout_chain;
+ VK_DESCRIPTOR_SET_LAYOUT desc_layout;
+ VK_PIPELINE pipeline;
- XGL_DYNAMIC_VP_STATE_OBJECT viewport;
- XGL_DYNAMIC_RS_STATE_OBJECT raster;
- XGL_DYNAMIC_CB_STATE_OBJECT color_blend;
- XGL_DYNAMIC_DS_STATE_OBJECT depth_stencil;
+ VK_DYNAMIC_VP_STATE_OBJECT viewport;
+ VK_DYNAMIC_RS_STATE_OBJECT raster;
+ VK_DYNAMIC_CB_STATE_OBJECT color_blend;
+ VK_DYNAMIC_DS_STATE_OBJECT depth_stencil;
mat4x4 projection_matrix;
mat4x4 view_matrix;
float spin_increment;
bool pause;
- XGL_DESCRIPTOR_POOL desc_pool;
- XGL_DESCRIPTOR_SET desc_set;
+ VK_DESCRIPTOR_POOL desc_pool;
+ VK_DESCRIPTOR_SET desc_set;
xcb_window_t window;
xcb_intern_atom_reply_t *atom_wm_delete_window;
static void demo_flush_init_cmd(struct demo *demo)
{
- XGL_RESULT err;
+ VK_RESULT err;
- if (demo->cmd == XGL_NULL_HANDLE)
+ if (demo->cmd == VK_NULL_HANDLE)
return;
- err = xglEndCommandBuffer(demo->cmd);
+ err = vkEndCommandBuffer(demo->cmd);
assert(!err);
- const XGL_CMD_BUFFER cmd_bufs[] = { demo->cmd };
+ const VK_CMD_BUFFER cmd_bufs[] = { demo->cmd };
- err = xglQueueSubmit(demo->queue, 1, cmd_bufs, XGL_NULL_HANDLE);
+ err = vkQueueSubmit(demo->queue, 1, cmd_bufs, VK_NULL_HANDLE);
assert(!err);
- err = xglQueueWaitIdle(demo->queue);
+ err = vkQueueWaitIdle(demo->queue);
assert(!err);
- xglDestroyObject(demo->cmd);
- demo->cmd = XGL_NULL_HANDLE;
+ vkDestroyObject(demo->cmd);
+ demo->cmd = VK_NULL_HANDLE;
}
static void demo_add_mem_refs(
struct demo *demo,
- int num_refs, XGL_GPU_MEMORY *mem)
+ int num_refs, VK_GPU_MEMORY *mem)
{
for (int i = 0; i < num_refs; i++) {
- xglQueueAddMemReference(demo->queue, mem[i]);
+ vkQueueAddMemReference(demo->queue, mem[i]);
}
}
static void demo_remove_mem_refs(
struct demo *demo,
- int num_refs, XGL_GPU_MEMORY *mem)
+ int num_refs, VK_GPU_MEMORY *mem)
{
for (int i = 0; i < num_refs; i++) {
- xglQueueRemoveMemReference(demo->queue, mem[i]);
+ vkQueueRemoveMemReference(demo->queue, mem[i]);
}
}
static void demo_set_image_layout(
struct demo *demo,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT old_image_layout,
- XGL_IMAGE_LAYOUT new_image_layout)
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT old_image_layout,
+ VK_IMAGE_LAYOUT new_image_layout)
{
- XGL_RESULT err;
+ VK_RESULT err;
- if (demo->cmd == XGL_NULL_HANDLE) {
- const XGL_CMD_BUFFER_CREATE_INFO cmd = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
+ if (demo->cmd == VK_NULL_HANDLE) {
+ const VK_CMD_BUFFER_CREATE_INFO cmd = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
.pNext = NULL,
.queueNodeIndex = demo->graphics_queue_node_index,
.flags = 0,
};
- err = xglCreateCommandBuffer(demo->device, &cmd, &demo->cmd);
+ err = vkCreateCommandBuffer(demo->device, &cmd, &demo->cmd);
assert(!err);
- XGL_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
+ VK_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
.pNext = NULL,
- .flags = XGL_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
- XGL_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
+ .flags = VK_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
+ VK_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
};
- err = xglBeginCommandBuffer(demo->cmd, &cmd_buf_info);
+ err = vkBeginCommandBuffer(demo->cmd, &cmd_buf_info);
}
- XGL_IMAGE_MEMORY_BARRIER image_memory_barrier = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
+ VK_IMAGE_MEMORY_BARRIER image_memory_barrier = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
.pNext = NULL,
.outputMask = 0,
.inputMask = 0,
.oldLayout = old_image_layout,
.newLayout = new_image_layout,
.image = image,
- .subresourceRange = { XGL_IMAGE_ASPECT_COLOR, 0, 1, 0, 0 }
+ .subresourceRange = { VK_IMAGE_ASPECT_COLOR, 0, 1, 0, 0 }
};
- if (new_image_layout == XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL) {
+ if (new_image_layout == VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL) {
/* Make sure anything that was copying from this image has completed */
- image_memory_barrier.inputMask = XGL_MEMORY_INPUT_COPY_BIT;
+ image_memory_barrier.inputMask = VK_MEMORY_INPUT_COPY_BIT;
}
- if (new_image_layout == XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
+ if (new_image_layout == VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
/* Make sure any Copy or CPU writes to image are flushed */
- image_memory_barrier.outputMask = XGL_MEMORY_OUTPUT_COPY_BIT | XGL_MEMORY_OUTPUT_CPU_WRITE_BIT;
+ image_memory_barrier.outputMask = VK_MEMORY_OUTPUT_COPY_BIT | VK_MEMORY_OUTPUT_CPU_WRITE_BIT;
}
- XGL_IMAGE_MEMORY_BARRIER *pmemory_barrier = &image_memory_barrier;
+ VK_IMAGE_MEMORY_BARRIER *pmemory_barrier = &image_memory_barrier;
- XGL_PIPE_EVENT set_events[] = { XGL_PIPE_EVENT_TOP_OF_PIPE };
+ VK_PIPE_EVENT set_events[] = { VK_PIPE_EVENT_TOP_OF_PIPE };
- XGL_PIPELINE_BARRIER pipeline_barrier;
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ VK_PIPELINE_BARRIER pipeline_barrier;
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.pNext = NULL;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = set_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = 1;
pipeline_barrier.ppMemBarriers = (const void **)&pmemory_barrier;
- xglCmdPipelineBarrier(demo->cmd, &pipeline_barrier);
+ vkCmdPipelineBarrier(demo->cmd, &pipeline_barrier);
}
-static void demo_draw_build_cmd(struct demo *demo, XGL_CMD_BUFFER cmd_buf)
+static void demo_draw_build_cmd(struct demo *demo, VK_CMD_BUFFER cmd_buf)
{
- const XGL_COLOR_ATTACHMENT_BIND_INFO color_attachment = {
+ const VK_COLOR_ATTACHMENT_BIND_INFO color_attachment = {
.view = demo->buffers[demo->current_buffer].view,
- .layout = XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
+ .layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
};
- const XGL_DEPTH_STENCIL_BIND_INFO depth_stencil = {
+ const VK_DEPTH_STENCIL_BIND_INFO depth_stencil = {
.view = demo->depth.view,
- .layout = XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
+ .layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
};
- const XGL_CLEAR_COLOR clear_color = {
+ const VK_CLEAR_COLOR clear_color = {
.color.floatColor = { 0.2f, 0.2f, 0.2f, 0.2f },
.useRawValue = false,
};
const float clear_depth = 1.0f;
- XGL_IMAGE_SUBRESOURCE_RANGE clear_range;
- XGL_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
+ VK_IMAGE_SUBRESOURCE_RANGE clear_range;
+ VK_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
.pNext = NULL,
- .flags = XGL_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
- XGL_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
+ .flags = VK_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
+ VK_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
};
- XGL_RESULT err;
- XGL_ATTACHMENT_LOAD_OP load_op = XGL_ATTACHMENT_LOAD_OP_DONT_CARE;
- XGL_ATTACHMENT_STORE_OP store_op = XGL_ATTACHMENT_STORE_OP_DONT_CARE;
- const XGL_FRAMEBUFFER_CREATE_INFO fb_info = {
- .sType = XGL_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
+ VK_RESULT err;
+ VK_ATTACHMENT_LOAD_OP load_op = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
+ VK_ATTACHMENT_STORE_OP store_op = VK_ATTACHMENT_STORE_OP_DONT_CARE;
+ const VK_FRAMEBUFFER_CREATE_INFO fb_info = {
+ .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
.pNext = NULL,
.colorAttachmentCount = 1,
- .pColorAttachments = (XGL_COLOR_ATTACHMENT_BIND_INFO*) &color_attachment,
- .pDepthStencilAttachment = (XGL_DEPTH_STENCIL_BIND_INFO*) &depth_stencil,
+ .pColorAttachments = (VK_COLOR_ATTACHMENT_BIND_INFO*) &color_attachment,
+ .pDepthStencilAttachment = (VK_DEPTH_STENCIL_BIND_INFO*) &depth_stencil,
.sampleCount = 1,
.width = demo->width,
.height = demo->height,
.layers = 1,
};
- XGL_RENDER_PASS_CREATE_INFO rp_info;
- XGL_RENDER_PASS_BEGIN rp_begin;
+ VK_RENDER_PASS_CREATE_INFO rp_info;
+ VK_RENDER_PASS_BEGIN rp_begin;
memset(&rp_info, 0 , sizeof(rp_info));
- err = xglCreateFramebuffer(demo->device, &fb_info, &rp_begin.framebuffer);
+ err = vkCreateFramebuffer(demo->device, &fb_info, &rp_begin.framebuffer);
assert(!err);
- rp_info.sType = XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
+ rp_info.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
rp_info.renderArea.extent.width = demo->width;
rp_info.renderArea.extent.height = demo->height;
rp_info.colorAttachmentCount = fb_info.colorAttachmentCount;
rp_info.pColorLoadOps = &load_op;
rp_info.pColorStoreOps = &store_op;
rp_info.pColorLoadClearValues = &clear_color;
- rp_info.depthStencilFormat = XGL_FMT_D16_UNORM;
+ rp_info.depthStencilFormat = VK_FMT_D16_UNORM;
rp_info.depthStencilLayout = depth_stencil.layout;
- rp_info.depthLoadOp = XGL_ATTACHMENT_LOAD_OP_DONT_CARE;
+ rp_info.depthLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
rp_info.depthLoadClearValue = clear_depth;
- rp_info.depthStoreOp = XGL_ATTACHMENT_STORE_OP_DONT_CARE;
- rp_info.stencilLoadOp = XGL_ATTACHMENT_LOAD_OP_DONT_CARE;
+ rp_info.depthStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
+ rp_info.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
rp_info.stencilLoadClearValue = 0;
- rp_info.stencilStoreOp = XGL_ATTACHMENT_STORE_OP_DONT_CARE;
- err = xglCreateRenderPass(demo->device, &rp_info, &rp_begin.renderPass);
+ rp_info.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
+ err = vkCreateRenderPass(demo->device, &rp_info, &rp_begin.renderPass);
assert(!err);
- err = xglBeginCommandBuffer(cmd_buf, &cmd_buf_info);
+ err = vkBeginCommandBuffer(cmd_buf, &cmd_buf_info);
assert(!err);
- xglCmdBindPipeline(cmd_buf, XGL_PIPELINE_BIND_POINT_GRAPHICS,
+ vkCmdBindPipeline(cmd_buf, VK_PIPELINE_BIND_POINT_GRAPHICS,
demo->pipeline);
- xglCmdBindDescriptorSets(cmd_buf, XGL_PIPELINE_BIND_POINT_GRAPHICS,
+ vkCmdBindDescriptorSets(cmd_buf, VK_PIPELINE_BIND_POINT_GRAPHICS,
demo->desc_layout_chain, 0, 1, &demo->desc_set, NULL);
- xglCmdBindDynamicStateObject(cmd_buf, XGL_STATE_BIND_VIEWPORT, demo->viewport);
- xglCmdBindDynamicStateObject(cmd_buf, XGL_STATE_BIND_RASTER, demo->raster);
- xglCmdBindDynamicStateObject(cmd_buf, XGL_STATE_BIND_COLOR_BLEND,
+ vkCmdBindDynamicStateObject(cmd_buf, VK_STATE_BIND_VIEWPORT, demo->viewport);
+ vkCmdBindDynamicStateObject(cmd_buf, VK_STATE_BIND_RASTER, demo->raster);
+ vkCmdBindDynamicStateObject(cmd_buf, VK_STATE_BIND_COLOR_BLEND,
demo->color_blend);
- xglCmdBindDynamicStateObject(cmd_buf, XGL_STATE_BIND_DEPTH_STENCIL,
+ vkCmdBindDynamicStateObject(cmd_buf, VK_STATE_BIND_DEPTH_STENCIL,
demo->depth_stencil);
- xglCmdBeginRenderPass(cmd_buf, &rp_begin);
- clear_range.aspect = XGL_IMAGE_ASPECT_COLOR;
+ vkCmdBeginRenderPass(cmd_buf, &rp_begin);
+ clear_range.aspect = VK_IMAGE_ASPECT_COLOR;
clear_range.baseMipLevel = 0;
clear_range.mipLevels = 1;
clear_range.baseArraySlice = 0;
clear_range.arraySize = 1;
- xglCmdClearColorImage(cmd_buf,
+ vkCmdClearColorImage(cmd_buf,
demo->buffers[demo->current_buffer].image,
- XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
clear_color, 1, &clear_range);
- clear_range.aspect = XGL_IMAGE_ASPECT_DEPTH;
- xglCmdClearDepthStencil(cmd_buf, demo->depth.image,
- XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ clear_range.aspect = VK_IMAGE_ASPECT_DEPTH;
+ vkCmdClearDepthStencil(cmd_buf, demo->depth.image,
+ VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
clear_depth, 0, 1, &clear_range);
- xglCmdDraw(cmd_buf, 0, 12 * 3, 0, 1);
- xglCmdEndRenderPass(cmd_buf, rp_begin.renderPass);
+ vkCmdDraw(cmd_buf, 0, 12 * 3, 0, 1);
+ vkCmdEndRenderPass(cmd_buf, rp_begin.renderPass);
- err = xglEndCommandBuffer(cmd_buf);
+ err = vkEndCommandBuffer(cmd_buf);
assert(!err);
- xglDestroyObject(rp_begin.renderPass);
- xglDestroyObject(rp_begin.framebuffer);
+ vkDestroyObject(rp_begin.renderPass);
+ vkDestroyObject(rp_begin.framebuffer);
}
mat4x4 MVP, Model, VP;
int matrixSize = sizeof(MVP);
uint8_t *pData;
- XGL_RESULT err;
+ VK_RESULT err;
mat4x4_mul(VP, demo->projection_matrix, demo->view_matrix);
mat4x4_mul(MVP, VP, demo->model_matrix);
assert(demo->uniform_data.num_mem == 1);
- err = xglMapMemory(demo->uniform_data.mem[0], 0, (void **) &pData);
+ err = vkMapMemory(demo->uniform_data.mem[0], 0, (void **) &pData);
assert(!err);
memcpy(pData, (const void*) &MVP[0][0], matrixSize);
- err = xglUnmapMemory(demo->uniform_data.mem[0]);
+ err = vkUnmapMemory(demo->uniform_data.mem[0]);
assert(!err);
}
static void demo_draw(struct demo *demo)
{
- const XGL_WSI_X11_PRESENT_INFO present = {
+ const VK_WSI_X11_PRESENT_INFO present = {
.destWindow = demo->window,
.srcImage = demo->buffers[demo->current_buffer].image,
.async = true,
.flip = false,
};
- XGL_FENCE fence = demo->buffers[demo->current_buffer].fence;
- XGL_RESULT err;
+ VK_FENCE fence = demo->buffers[demo->current_buffer].fence;
+ VK_RESULT err;
- err = xglWaitForFences(demo->device, 1, &fence, XGL_TRUE, ~((uint64_t) 0));
- assert(err == XGL_SUCCESS || err == XGL_ERROR_UNAVAILABLE);
+ err = vkWaitForFences(demo->device, 1, &fence, VK_TRUE, ~((uint64_t) 0));
+ assert(err == VK_SUCCESS || err == VK_ERROR_UNAVAILABLE);
- err = xglQueueSubmit(demo->queue, 1, &demo->buffers[demo->current_buffer].cmd,
- XGL_NULL_HANDLE);
+ err = vkQueueSubmit(demo->queue, 1, &demo->buffers[demo->current_buffer].cmd,
+ VK_NULL_HANDLE);
assert(!err);
- err = xglWsiX11QueuePresent(demo->queue, &present, fence);
+ err = vkWsiX11QueuePresent(demo->queue, &present, fence);
assert(!err);
demo->current_buffer = (demo->current_buffer + 1) % DEMO_BUFFER_COUNT;
static void demo_prepare_buffers(struct demo *demo)
{
- const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO presentable_image = {
+ const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO presentable_image = {
.format = demo->format,
- .usage = XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
+ .usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
.extent = {
.width = demo->width,
.height = demo->height,
},
.flags = 0,
};
- const XGL_FENCE_CREATE_INFO fence = {
- .sType = XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO,
+ const VK_FENCE_CREATE_INFO fence = {
+ .sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,
.pNext = NULL,
.flags = 0,
};
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i;
for (i = 0; i < DEMO_BUFFER_COUNT; i++) {
- XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO color_attachment_view = {
- .sType = XGL_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
+ VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO color_attachment_view = {
+ .sType = VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
.pNext = NULL,
.format = demo->format,
.mipLevel = 0,
.arraySize = 1,
};
- err = xglWsiX11CreatePresentableImage(demo->device, &presentable_image,
+ err = vkWsiX11CreatePresentableImage(demo->device, &presentable_image,
&demo->buffers[i].image, &demo->buffers[i].mem);
assert(!err);
demo_add_mem_refs(demo, 1, &demo->buffers[i].mem);
demo_set_image_layout(demo, demo->buffers[i].image,
- XGL_IMAGE_LAYOUT_UNDEFINED,
- XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
+ VK_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
color_attachment_view.image = demo->buffers[i].image;
- err = xglCreateColorAttachmentView(demo->device,
+ err = vkCreateColorAttachmentView(demo->device,
&color_attachment_view, &demo->buffers[i].view);
assert(!err);
- err = xglCreateFence(demo->device,
+ err = vkCreateFence(demo->device,
&fence, &demo->buffers[i].fence);
assert(!err);
}
static void demo_prepare_depth(struct demo *demo)
{
- const XGL_FORMAT depth_format = XGL_FMT_D16_UNORM;
- const XGL_IMAGE_CREATE_INFO image = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
+ const VK_FORMAT depth_format = VK_FMT_D16_UNORM;
+ const VK_IMAGE_CREATE_INFO image = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
.pNext = NULL,
- .imageType = XGL_IMAGE_2D,
+ .imageType = VK_IMAGE_2D,
.format = depth_format,
.extent = { demo->width, demo->height, 1 },
.mipLevels = 1,
.arraySize = 1,
.samples = 1,
- .tiling = XGL_OPTIMAL_TILING,
- .usage = XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT,
+ .tiling = VK_OPTIMAL_TILING,
+ .usage = VK_IMAGE_USAGE_DEPTH_STENCIL_BIT,
.flags = 0,
};
- XGL_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
+ VK_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
.pNext = NULL,
};
- XGL_MEMORY_ALLOC_INFO mem_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
+ VK_MEMORY_ALLOC_INFO mem_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
.pNext = &img_alloc,
.allocationSize = 0,
- .memProps = XGL_MEMORY_PROPERTY_GPU_ONLY,
- .memType = XGL_MEMORY_TYPE_IMAGE,
- .memPriority = XGL_MEMORY_PRIORITY_NORMAL,
+ .memProps = VK_MEMORY_PROPERTY_GPU_ONLY,
+ .memType = VK_MEMORY_TYPE_IMAGE,
+ .memPriority = VK_MEMORY_PRIORITY_NORMAL,
};
- XGL_DEPTH_STENCIL_VIEW_CREATE_INFO view = {
- .sType = XGL_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO,
+ VK_DEPTH_STENCIL_VIEW_CREATE_INFO view = {
+ .sType = VK_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO,
.pNext = NULL,
- .image = XGL_NULL_HANDLE,
+ .image = VK_NULL_HANDLE,
.mipLevel = 0,
.baseArraySlice = 0,
.arraySize = 1,
.flags = 0,
};
- XGL_MEMORY_REQUIREMENTS *mem_reqs;
- size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
- XGL_IMAGE_MEMORY_REQUIREMENTS img_reqs;
- size_t img_reqs_size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
- XGL_RESULT err;
+ VK_MEMORY_REQUIREMENTS *mem_reqs;
+ size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
+ VK_IMAGE_MEMORY_REQUIREMENTS img_reqs;
+ size_t img_reqs_size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
+ VK_RESULT err;
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
demo->depth.format = depth_format;
/* create image */
- err = xglCreateImage(demo->device, &image,
+ err = vkCreateImage(demo->device, &image,
&demo->depth.image);
assert(!err);
- err = xglGetObjectInfo(demo->depth.image, XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT, &num_alloc_size, &num_allocations);
+ err = vkGetObjectInfo(demo->depth.image, VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT, &num_alloc_size, &num_allocations);
assert(!err && num_alloc_size == sizeof(num_allocations));
- mem_reqs = malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- demo->depth.mem = malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
+ mem_reqs = malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ demo->depth.mem = malloc(num_allocations * sizeof(VK_GPU_MEMORY));
demo->depth.num_mem = num_allocations;
- err = xglGetObjectInfo(demo->depth.image,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(demo->depth.image,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_reqs);
- assert(!err && mem_reqs_size == num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- err = xglGetObjectInfo(demo->depth.image,
- XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
+ assert(!err && mem_reqs_size == num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ err = vkGetObjectInfo(demo->depth.image,
+ VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
&img_reqs_size, &img_reqs);
- assert(!err && img_reqs_size == sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS));
+ assert(!err && img_reqs_size == sizeof(VK_IMAGE_MEMORY_REQUIREMENTS));
img_alloc.usage = img_reqs.usage;
img_alloc.formatClass = img_reqs.formatClass;
img_alloc.samples = img_reqs.samples;
for (uint32_t i = 0; i < num_allocations; i ++) {
mem_alloc.allocationSize = mem_reqs[i].size;
/* allocate memory */
- err = xglAllocMemory(demo->device, &mem_alloc,
+ err = vkAllocMemory(demo->device, &mem_alloc,
&(demo->depth.mem[i]));
assert(!err);
/* bind memory */
- err = xglBindObjectMemory(demo->depth.image, i,
+ err = vkBindObjectMemory(demo->depth.image, i,
demo->depth.mem[i], 0);
assert(!err);
}
demo_set_image_layout(demo, demo->depth.image,
- XGL_IMAGE_LAYOUT_UNDEFINED,
- XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
+ VK_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
demo_add_mem_refs(demo, demo->depth.num_mem, demo->depth.mem);
/* create image view */
view.image = demo->depth.image;
- err = xglCreateDepthStencilView(demo->device, &view,
+ err = vkCreateDepthStencilView(demo->device, &view,
&demo->depth.view);
assert(!err);
}
/** loadTexture
 * loads a png file into a memory object, using cstdio and libpng.
*
- * \param demo : Needed to access XGL calls
+ * \param rgba_data : destination buffer for the decoded RGBA pixels (may be NULL to query dimensions only)
* \param filename : the png file to be loaded
* \param width : width of png, to be updated as a side effect of this function
* \param height : height of png, to be updated as a side effect of this function
*
*/
bool loadTexture(const char *filename, uint8_t *rgba_data,
- XGL_SUBRESOURCE_LAYOUT *layout,
+ VK_SUBRESOURCE_LAYOUT *layout,
int32_t *width, int32_t *height)
{
//header for testing if it is a png
static void demo_prepare_texture_image(struct demo *demo,
const char *filename,
struct texture_object *tex_obj,
- XGL_IMAGE_TILING tiling,
- XGL_FLAGS mem_props)
+ VK_IMAGE_TILING tiling,
+ VK_FLAGS mem_props)
{
- const XGL_FORMAT tex_format = XGL_FMT_B8G8R8A8_UNORM;
+ const VK_FORMAT tex_format = VK_FMT_B8G8R8A8_UNORM;
int32_t tex_width;
int32_t tex_height;
- XGL_RESULT err;
+ VK_RESULT err;
err = loadTexture(filename, NULL, NULL, &tex_width, &tex_height);
assert(err);
tex_obj->tex_width = tex_width;
tex_obj->tex_height = tex_height;
- const XGL_IMAGE_CREATE_INFO image_create_info = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
+ const VK_IMAGE_CREATE_INFO image_create_info = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
.pNext = NULL,
- .imageType = XGL_IMAGE_2D,
+ .imageType = VK_IMAGE_2D,
.format = tex_format,
.extent = { tex_width, tex_height, 1 },
.mipLevels = 1,
.arraySize = 1,
.samples = 1,
.tiling = tiling,
- .usage = XGL_IMAGE_USAGE_TRANSFER_SOURCE_BIT,
+ .usage = VK_IMAGE_USAGE_TRANSFER_SOURCE_BIT,
.flags = 0,
};
- XGL_MEMORY_ALLOC_BUFFER_INFO buf_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO,
+ VK_MEMORY_ALLOC_BUFFER_INFO buf_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO,
.pNext = NULL,
};
- XGL_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
+ VK_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
.pNext = &buf_alloc,
};
- XGL_MEMORY_ALLOC_INFO mem_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
+ VK_MEMORY_ALLOC_INFO mem_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
.pNext = &img_alloc,
.allocationSize = 0,
.memProps = mem_props,
- .memType = XGL_MEMORY_TYPE_IMAGE,
- .memPriority = XGL_MEMORY_PRIORITY_NORMAL,
+ .memType = VK_MEMORY_TYPE_IMAGE,
+ .memPriority = VK_MEMORY_PRIORITY_NORMAL,
};
- XGL_MEMORY_REQUIREMENTS *mem_reqs;
- size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
- XGL_BUFFER_MEMORY_REQUIREMENTS buf_reqs;
- size_t buf_reqs_size = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
- XGL_IMAGE_MEMORY_REQUIREMENTS img_reqs;
- size_t img_reqs_size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
+ VK_MEMORY_REQUIREMENTS *mem_reqs;
+ size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
+ VK_BUFFER_MEMORY_REQUIREMENTS buf_reqs;
+ size_t buf_reqs_size = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
+ VK_IMAGE_MEMORY_REQUIREMENTS img_reqs;
+ size_t img_reqs_size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
- err = xglCreateImage(demo->device, &image_create_info,
+ err = vkCreateImage(demo->device, &image_create_info,
&tex_obj->image);
assert(!err);
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
&num_alloc_size, &num_allocations);
assert(!err && num_alloc_size == sizeof(num_allocations));
- mem_reqs = malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- tex_obj->mem = malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ mem_reqs = malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ tex_obj->mem = malloc(num_allocations * sizeof(VK_GPU_MEMORY));
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_reqs);
- assert(!err && mem_reqs_size == num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
+ assert(!err && mem_reqs_size == num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
&img_reqs_size, &img_reqs);
- assert(!err && img_reqs_size == sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS));
+ assert(!err && img_reqs_size == sizeof(VK_IMAGE_MEMORY_REQUIREMENTS));
img_alloc.usage = img_reqs.usage;
img_alloc.formatClass = img_reqs.formatClass;
img_alloc.samples = img_reqs.samples;
- mem_alloc.memProps = XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT;
+ mem_alloc.memProps = VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT;
for (uint32_t j = 0; j < num_allocations; j ++) {
mem_alloc.memType = mem_reqs[j].memType;
mem_alloc.allocationSize = mem_reqs[j].size;
- if (mem_alloc.memType == XGL_MEMORY_TYPE_BUFFER) {
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS,
+ if (mem_alloc.memType == VK_MEMORY_TYPE_BUFFER) {
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS,
&buf_reqs_size, &buf_reqs);
- assert(!err && buf_reqs_size == sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS));
+ assert(!err && buf_reqs_size == sizeof(VK_BUFFER_MEMORY_REQUIREMENTS));
buf_alloc.usage = buf_reqs.usage;
img_alloc.pNext = &buf_alloc;
} else {
}
/* allocate memory */
- err = xglAllocMemory(demo->device, &mem_alloc,
+ err = vkAllocMemory(demo->device, &mem_alloc,
&(tex_obj->mem[j]));
assert(!err);
/* bind memory */
- err = xglBindObjectMemory(tex_obj->image, j, tex_obj->mem[j], 0);
+ err = vkBindObjectMemory(tex_obj->image, j, tex_obj->mem[j], 0);
assert(!err);
}
free(mem_reqs);
tex_obj->num_mem = num_allocations;
- if (mem_props & XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT) {
- const XGL_IMAGE_SUBRESOURCE subres = {
- .aspect = XGL_IMAGE_ASPECT_COLOR,
+ if (mem_props & VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT) {
+ const VK_IMAGE_SUBRESOURCE subres = {
+ .aspect = VK_IMAGE_ASPECT_COLOR,
.mipLevel = 0,
.arraySlice = 0,
};
- XGL_SUBRESOURCE_LAYOUT layout;
- size_t layout_size = sizeof(XGL_SUBRESOURCE_LAYOUT);
+ VK_SUBRESOURCE_LAYOUT layout;
+ size_t layout_size = sizeof(VK_SUBRESOURCE_LAYOUT);
void *data;
- err = xglGetImageSubresourceInfo(tex_obj->image, &subres,
- XGL_INFO_TYPE_SUBRESOURCE_LAYOUT,
+ err = vkGetImageSubresourceInfo(tex_obj->image, &subres,
+ VK_INFO_TYPE_SUBRESOURCE_LAYOUT,
&layout_size, &layout);
assert(!err && layout_size == sizeof(layout));
/* Linear texture must be within a single memory object */
assert(num_allocations == 1);
- err = xglMapMemory(tex_obj->mem[0], 0, &data);
+ err = vkMapMemory(tex_obj->mem[0], 0, &data);
assert(!err);
if (!loadTexture(filename, data, &layout, &tex_width, &tex_height)) {
fprintf(stderr, "Error loading texture: %s\n", filename);
}
- err = xglUnmapMemory(tex_obj->mem[0]);
+ err = vkUnmapMemory(tex_obj->mem[0]);
assert(!err);
}
- tex_obj->imageLayout = XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
+ tex_obj->imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
demo_set_image_layout(demo, tex_obj->image,
- XGL_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_UNDEFINED,
tex_obj->imageLayout);
/* setting the image layout does not reference the actual memory so no need to add a mem ref */
}
static void demo_destroy_texture_image(struct texture_object *tex_objs)
{
/* clean up staging resources */
for (uint32_t j = 0; j < tex_objs->num_mem; j ++) {
- xglBindObjectMemory(tex_objs->image, j, XGL_NULL_HANDLE, 0);
- xglFreeMemory(tex_objs->mem[j]);
+ vkBindObjectMemory(tex_objs->image, j, VK_NULL_HANDLE, 0);
+ vkFreeMemory(tex_objs->mem[j]);
}
free(tex_objs->mem);
- xglDestroyObject(tex_objs->image);
+ vkDestroyObject(tex_objs->image);
}
static void demo_prepare_textures(struct demo *demo)
{
- const XGL_FORMAT tex_format = XGL_FMT_R8G8B8A8_UNORM;
- XGL_FORMAT_PROPERTIES props;
+ const VK_FORMAT tex_format = VK_FMT_R8G8B8A8_UNORM;
+ VK_FORMAT_PROPERTIES props;
size_t size = sizeof(props);
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i;
- err = xglGetFormatInfo(demo->device, tex_format,
- XGL_INFO_TYPE_FORMAT_PROPERTIES,
+ err = vkGetFormatInfo(demo->device, tex_format,
+ VK_INFO_TYPE_FORMAT_PROPERTIES,
&size, &props);
assert(!err);
for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
- if (props.linearTilingFeatures & XGL_FORMAT_IMAGE_SHADER_READ_BIT && !demo->use_staging_buffer) {
+ if (props.linearTilingFeatures & VK_FORMAT_IMAGE_SHADER_READ_BIT && !demo->use_staging_buffer) {
/* Device can texture using linear textures */
demo_prepare_texture_image(demo, tex_files[i], &demo->textures[i],
- XGL_LINEAR_TILING, XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
- } else if (props.optimalTilingFeatures & XGL_FORMAT_IMAGE_SHADER_READ_BIT) {
+ VK_LINEAR_TILING, VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
+ } else if (props.optimalTilingFeatures & VK_FORMAT_IMAGE_SHADER_READ_BIT) {
/* Must use staging buffer to copy linear texture to optimized */
struct texture_object staging_texture;
memset(&staging_texture, 0, sizeof(staging_texture));
demo_prepare_texture_image(demo, tex_files[i], &staging_texture,
- XGL_LINEAR_TILING, XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
+ VK_LINEAR_TILING, VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
demo_prepare_texture_image(demo, tex_files[i], &demo->textures[i],
- XGL_OPTIMAL_TILING, XGL_MEMORY_PROPERTY_GPU_ONLY);
+ VK_OPTIMAL_TILING, VK_MEMORY_PROPERTY_GPU_ONLY);
demo_set_image_layout(demo, staging_texture.image,
staging_texture.imageLayout,
- XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL);
+ VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL);
demo_set_image_layout(demo, demo->textures[i].image,
demo->textures[i].imageLayout,
- XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL);
+ VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL);
- XGL_IMAGE_COPY copy_region = {
- .srcSubresource = { XGL_IMAGE_ASPECT_COLOR, 0, 0 },
+ VK_IMAGE_COPY copy_region = {
+ .srcSubresource = { VK_IMAGE_ASPECT_COLOR, 0, 0 },
.srcOffset = { 0, 0, 0 },
- .destSubresource = { XGL_IMAGE_ASPECT_COLOR, 0, 0 },
+ .destSubresource = { VK_IMAGE_ASPECT_COLOR, 0, 0 },
.destOffset = { 0, 0, 0 },
.extent = { staging_texture.tex_width, staging_texture.tex_height, 1 },
};
- xglCmdCopyImage(demo->cmd,
- staging_texture.image, XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
- demo->textures[i].image, XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ vkCmdCopyImage(demo->cmd,
+ staging_texture.image, VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
+ demo->textures[i].image, VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
1, &copy_region);
demo_add_mem_refs(demo, staging_texture.num_mem, staging_texture.mem);
demo_add_mem_refs(demo, demo->textures[i].num_mem, demo->textures[i].mem);
demo_set_image_layout(demo, demo->textures[i].image,
- XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
demo->textures[i].imageLayout);
demo_flush_init_cmd(demo);
demo_destroy_texture_image(&staging_texture);
demo_remove_mem_refs(demo, staging_texture.num_mem, staging_texture.mem);
} else {
- /* Can't support XGL_FMT_B8G8R8A8_UNORM !? */
+ /* Can't support VK_FMT_B8G8R8A8_UNORM !? */
assert(!"No support for B8G8R8A8_UNORM as texture image format");
}
- const XGL_SAMPLER_CREATE_INFO sampler = {
- .sType = XGL_STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
+ const VK_SAMPLER_CREATE_INFO sampler = {
+ .sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
.pNext = NULL,
- .magFilter = XGL_TEX_FILTER_NEAREST,
- .minFilter = XGL_TEX_FILTER_NEAREST,
- .mipMode = XGL_TEX_MIPMAP_BASE,
- .addressU = XGL_TEX_ADDRESS_CLAMP,
- .addressV = XGL_TEX_ADDRESS_CLAMP,
- .addressW = XGL_TEX_ADDRESS_CLAMP,
+ .magFilter = VK_TEX_FILTER_NEAREST,
+ .minFilter = VK_TEX_FILTER_NEAREST,
+ .mipMode = VK_TEX_MIPMAP_BASE,
+ .addressU = VK_TEX_ADDRESS_CLAMP,
+ .addressV = VK_TEX_ADDRESS_CLAMP,
+ .addressW = VK_TEX_ADDRESS_CLAMP,
.mipLodBias = 0.0f,
.maxAnisotropy = 1,
- .compareFunc = XGL_COMPARE_NEVER,
+ .compareFunc = VK_COMPARE_NEVER,
.minLod = 0.0f,
.maxLod = 0.0f,
- .borderColorType = XGL_BORDER_COLOR_OPAQUE_WHITE,
+ .borderColorType = VK_BORDER_COLOR_OPAQUE_WHITE,
};
- XGL_IMAGE_VIEW_CREATE_INFO view = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
+ VK_IMAGE_VIEW_CREATE_INFO view = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
.pNext = NULL,
- .image = XGL_NULL_HANDLE,
- .viewType = XGL_IMAGE_VIEW_2D,
+ .image = VK_NULL_HANDLE,
+ .viewType = VK_IMAGE_VIEW_2D,
.format = tex_format,
- .channels = { XGL_CHANNEL_SWIZZLE_R,
- XGL_CHANNEL_SWIZZLE_G,
- XGL_CHANNEL_SWIZZLE_B,
- XGL_CHANNEL_SWIZZLE_A, },
- .subresourceRange = { XGL_IMAGE_ASPECT_COLOR, 0, 1, 0, 1 },
+ .channels = { VK_CHANNEL_SWIZZLE_R,
+ VK_CHANNEL_SWIZZLE_G,
+ VK_CHANNEL_SWIZZLE_B,
+ VK_CHANNEL_SWIZZLE_A, },
+ .subresourceRange = { VK_IMAGE_ASPECT_COLOR, 0, 1, 0, 1 },
.minLod = 0.0f,
};
/* create sampler */
- err = xglCreateSampler(demo->device, &sampler,
+ err = vkCreateSampler(demo->device, &sampler,
&demo->textures[i].sampler);
assert(!err);
/* create image view */
view.image = demo->textures[i].image;
- err = xglCreateImageView(demo->device, &view,
+ err = vkCreateImageView(demo->device, &view,
&demo->textures[i].view);
assert(!err);
}
static void demo_prepare_cube_data_buffer(struct demo *demo)
{
- XGL_BUFFER_CREATE_INFO buf_info;
- XGL_BUFFER_VIEW_CREATE_INFO view_info;
- XGL_MEMORY_ALLOC_BUFFER_INFO buf_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO,
+ VK_BUFFER_CREATE_INFO buf_info;
+ VK_BUFFER_VIEW_CREATE_INFO view_info;
+ VK_MEMORY_ALLOC_BUFFER_INFO buf_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO,
.pNext = NULL,
};
- XGL_MEMORY_ALLOC_INFO alloc_info = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
+ VK_MEMORY_ALLOC_INFO alloc_info = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
.pNext = &buf_alloc,
.allocationSize = 0,
- .memProps = XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT,
- .memType = XGL_MEMORY_TYPE_BUFFER,
- .memPriority = XGL_MEMORY_PRIORITY_NORMAL,
+ .memProps = VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT,
+ .memType = VK_MEMORY_TYPE_BUFFER,
+ .memPriority = VK_MEMORY_PRIORITY_NORMAL,
};
- XGL_MEMORY_REQUIREMENTS *mem_reqs;
- size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
- XGL_BUFFER_MEMORY_REQUIREMENTS buf_reqs;
- size_t buf_reqs_size = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
+ VK_MEMORY_REQUIREMENTS *mem_reqs;
+ size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
+ VK_BUFFER_MEMORY_REQUIREMENTS buf_reqs;
+ size_t buf_reqs_size = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
uint8_t *pData;
int i;
mat4x4 MVP, VP;
- XGL_RESULT err;
- struct xgltexcube_vs_uniform data;
+ VK_RESULT err;
+ struct vktexcube_vs_uniform data;
mat4x4_mul(VP, demo->projection_matrix, demo->view_matrix);
mat4x4_mul(MVP, VP, demo->model_matrix);
}
memset(&buf_info, 0, sizeof(buf_info));
- buf_info.sType = XGL_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
+ buf_info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
buf_info.size = sizeof(data);
- buf_info.usage = XGL_BUFFER_USAGE_UNIFORM_READ_BIT;
- err = xglCreateBuffer(demo->device, &buf_info, &demo->uniform_data.buf);
+ buf_info.usage = VK_BUFFER_USAGE_UNIFORM_READ_BIT;
+ err = vkCreateBuffer(demo->device, &buf_info, &demo->uniform_data.buf);
assert(!err);
- err = xglGetObjectInfo(demo->uniform_data.buf,
- XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
+ err = vkGetObjectInfo(demo->uniform_data.buf,
+ VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
&num_alloc_size, &num_allocations);
assert(!err && num_alloc_size == sizeof(num_allocations));
- mem_reqs = malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- demo->uniform_data.mem = malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
+ mem_reqs = malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ demo->uniform_data.mem = malloc(num_allocations * sizeof(VK_GPU_MEMORY));
demo->uniform_data.num_mem = num_allocations;
- err = xglGetObjectInfo(demo->uniform_data.buf,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(demo->uniform_data.buf,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_reqs);
assert(!err && mem_reqs_size == num_allocations * sizeof(*mem_reqs));
- err = xglGetObjectInfo(demo->uniform_data.buf,
- XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(demo->uniform_data.buf,
+ VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS,
&buf_reqs_size, &buf_reqs);
- assert(!err && buf_reqs_size == sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS));
+ assert(!err && buf_reqs_size == sizeof(VK_BUFFER_MEMORY_REQUIREMENTS));
buf_alloc.usage = buf_reqs.usage;
for (uint32_t i = 0; i < num_allocations; i ++) {
alloc_info.allocationSize = mem_reqs[i].size;
- err = xglAllocMemory(demo->device, &alloc_info, &(demo->uniform_data.mem[i]));
+ err = vkAllocMemory(demo->device, &alloc_info, &(demo->uniform_data.mem[i]));
assert(!err);
- err = xglMapMemory(demo->uniform_data.mem[i], 0, (void **) &pData);
+ err = vkMapMemory(demo->uniform_data.mem[i], 0, (void **) &pData);
assert(!err);
memcpy(pData, &data, (size_t)alloc_info.allocationSize);
- err = xglUnmapMemory(demo->uniform_data.mem[i]);
+ err = vkUnmapMemory(demo->uniform_data.mem[i]);
assert(!err);
- err = xglBindObjectMemory(demo->uniform_data.buf, i,
+ err = vkBindObjectMemory(demo->uniform_data.buf, i,
demo->uniform_data.mem[i], 0);
assert(!err);
}
demo_add_mem_refs(demo, demo->uniform_data.num_mem, demo->uniform_data.mem);
memset(&view_info, 0, sizeof(view_info));
- view_info.sType = XGL_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO;
+ view_info.sType = VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO;
view_info.buffer = demo->uniform_data.buf;
- view_info.viewType = XGL_BUFFER_VIEW_RAW;
+ view_info.viewType = VK_BUFFER_VIEW_RAW;
view_info.offset = 0;
view_info.range = sizeof(data);
- err = xglCreateBufferView(demo->device, &view_info, &demo->uniform_data.view);
+ err = vkCreateBufferView(demo->device, &view_info, &demo->uniform_data.view);
assert(!err);
- demo->uniform_data.attach.sType = XGL_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO;
+ demo->uniform_data.attach.sType = VK_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO;
demo->uniform_data.attach.view = demo->uniform_data.view;
}
static void demo_prepare_descriptor_layout(struct demo *demo)
{
- const XGL_DESCRIPTOR_SET_LAYOUT_BINDING layout_bindings[2] = {
+ const VK_DESCRIPTOR_SET_LAYOUT_BINDING layout_bindings[2] = {
[0] = {
- .descriptorType = XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
+ .descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
.count = 1,
- .stageFlags = XGL_SHADER_STAGE_FLAGS_VERTEX_BIT,
+ .stageFlags = VK_SHADER_STAGE_FLAGS_VERTEX_BIT,
.pImmutableSamplers = NULL,
},
[1] = {
- .descriptorType = XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
+ .descriptorType = VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
.count = DEMO_TEXTURE_COUNT,
- .stageFlags = XGL_SHADER_STAGE_FLAGS_FRAGMENT_BIT,
+ .stageFlags = VK_SHADER_STAGE_FLAGS_FRAGMENT_BIT,
.pImmutableSamplers = NULL,
},
};
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO descriptor_layout = {
- .sType = XGL_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO descriptor_layout = {
+ .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
.pNext = NULL,
.count = 2,
.pBinding = layout_bindings,
};
- XGL_RESULT err;
+ VK_RESULT err;
- err = xglCreateDescriptorSetLayout(demo->device,
+ err = vkCreateDescriptorSetLayout(demo->device,
&descriptor_layout, &demo->desc_layout);
assert(!err);
- err = xglCreateDescriptorSetLayoutChain(demo->device,
+ err = vkCreateDescriptorSetLayoutChain(demo->device,
1, &demo->desc_layout, &demo->desc_layout_chain);
assert(!err);
}
-static XGL_SHADER demo_prepare_shader(struct demo *demo,
- XGL_PIPELINE_SHADER_STAGE stage,
+static VK_SHADER demo_prepare_shader(struct demo *demo,
+ VK_PIPELINE_SHADER_STAGE stage,
const void *code,
size_t size)
{
- XGL_SHADER_CREATE_INFO createInfo;
- XGL_SHADER shader;
- XGL_RESULT err;
+ VK_SHADER_CREATE_INFO createInfo;
+ VK_SHADER shader;
+ VK_RESULT err;
- createInfo.sType = XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO;
+ createInfo.sType = VK_STRUCTURE_TYPE_SHADER_CREATE_INFO;
createInfo.pNext = NULL;
#ifdef EXTERNAL_SPV
createInfo.pCode = code;
createInfo.flags = 0;
- err = xglCreateShader(demo->device, &createInfo, &shader);
+ err = vkCreateShader(demo->device, &createInfo, &shader);
if (err) {
free((void *) createInfo.pCode);
}
createInfo.pCode = malloc(createInfo.codeSize);
createInfo.flags = 0;
- /* try version 0 first: XGL_PIPELINE_SHADER_STAGE followed by GLSL */
+ /* try version 0 first: VK_PIPELINE_SHADER_STAGE followed by GLSL */
((uint32_t *) createInfo.pCode)[0] = ICD_SPV_MAGIC;
((uint32_t *) createInfo.pCode)[1] = 0;
((uint32_t *) createInfo.pCode)[2] = stage;
memcpy(((uint32_t *) createInfo.pCode + 3), code, size + 1);
- err = xglCreateShader(demo->device, &createInfo, &shader);
+ err = vkCreateShader(demo->device, &createInfo, &shader);
if (err) {
free((void *) createInfo.pCode);
return NULL;
return shader_code;
}
-static XGL_SHADER demo_prepare_vs(struct demo *demo)
+static VK_SHADER demo_prepare_vs(struct demo *demo)
{
#ifdef EXTERNAL_SPV
void *vertShaderCode;
vertShaderCode = demo_read_spv("cube-vert.spv", &size);
- return demo_prepare_shader(demo, XGL_SHADER_STAGE_VERTEX,
+ return demo_prepare_shader(demo, VK_SHADER_STAGE_VERTEX,
vertShaderCode, size);
#else
static const char *vertShaderText =
" gl_Position = ubuf.MVP * ubuf.position[gl_VertexID];\n"
"}\n";
- return demo_prepare_shader(demo, XGL_SHADER_STAGE_VERTEX,
+ return demo_prepare_shader(demo, VK_SHADER_STAGE_VERTEX,
(const void *) vertShaderText,
strlen(vertShaderText));
#endif
}
-static XGL_SHADER demo_prepare_fs(struct demo *demo)
+static VK_SHADER demo_prepare_fs(struct demo *demo)
{
#ifdef EXTERNAL_SPV
void *fragShaderCode;
fragShaderCode = demo_read_spv("cube-frag.spv", &size);
- return demo_prepare_shader(demo, XGL_SHADER_STAGE_FRAGMENT,
+ return demo_prepare_shader(demo, VK_SHADER_STAGE_FRAGMENT,
fragShaderCode, size);
#else
static const char *fragShaderText =
" gl_FragColor = texture(tex, texcoord.xy);\n"
"}\n";
- return demo_prepare_shader(demo, XGL_SHADER_STAGE_FRAGMENT,
+ return demo_prepare_shader(demo, VK_SHADER_STAGE_FRAGMENT,
(const void *) fragShaderText,
strlen(fragShaderText));
#endif
static void demo_prepare_pipeline(struct demo *demo)
{
- XGL_GRAPHICS_PIPELINE_CREATE_INFO pipeline;
- XGL_PIPELINE_IA_STATE_CREATE_INFO ia;
- XGL_PIPELINE_RS_STATE_CREATE_INFO rs;
- XGL_PIPELINE_CB_STATE_CREATE_INFO cb;
- XGL_PIPELINE_DS_STATE_CREATE_INFO ds;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO vs;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO fs;
- XGL_PIPELINE_VP_STATE_CREATE_INFO vp;
- XGL_PIPELINE_MS_STATE_CREATE_INFO ms;
- XGL_RESULT err;
+ VK_GRAPHICS_PIPELINE_CREATE_INFO pipeline;
+ VK_PIPELINE_IA_STATE_CREATE_INFO ia;
+ VK_PIPELINE_RS_STATE_CREATE_INFO rs;
+ VK_PIPELINE_CB_STATE_CREATE_INFO cb;
+ VK_PIPELINE_DS_STATE_CREATE_INFO ds;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO vs;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO fs;
+ VK_PIPELINE_VP_STATE_CREATE_INFO vp;
+ VK_PIPELINE_MS_STATE_CREATE_INFO ms;
+ VK_RESULT err;
memset(&pipeline, 0, sizeof(pipeline));
- pipeline.sType = XGL_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
+ pipeline.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
pipeline.pSetLayoutChain = demo->desc_layout_chain;
memset(&ia, 0, sizeof(ia));
- ia.sType = XGL_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO;
- ia.topology = XGL_TOPOLOGY_TRIANGLE_LIST;
+ ia.sType = VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO;
+ ia.topology = VK_TOPOLOGY_TRIANGLE_LIST;
memset(&rs, 0, sizeof(rs));
- rs.sType = XGL_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO;
- rs.fillMode = XGL_FILL_SOLID;
- rs.cullMode = XGL_CULL_BACK;
- rs.frontFace = XGL_FRONT_FACE_CCW;
+ rs.sType = VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO;
+ rs.fillMode = VK_FILL_SOLID;
+ rs.cullMode = VK_CULL_BACK;
+ rs.frontFace = VK_FRONT_FACE_CCW;
memset(&cb, 0, sizeof(cb));
- cb.sType = XGL_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO;
- XGL_PIPELINE_CB_ATTACHMENT_STATE att_state[1];
+ cb.sType = VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO;
+ VK_PIPELINE_CB_ATTACHMENT_STATE att_state[1];
memset(att_state, 0, sizeof(att_state));
att_state[0].format = demo->format;
att_state[0].channelWriteMask = 0xf;
- att_state[0].blendEnable = XGL_FALSE;
+ att_state[0].blendEnable = VK_FALSE;
cb.attachmentCount = 1;
cb.pAttachments = att_state;
memset(&vp, 0, sizeof(vp));
- vp.sType = XGL_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO;
+ vp.sType = VK_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO;
vp.numViewports = 1;
- vp.clipOrigin = XGL_COORDINATE_ORIGIN_LOWER_LEFT;
+ vp.clipOrigin = VK_COORDINATE_ORIGIN_LOWER_LEFT;
memset(&ds, 0, sizeof(ds));
- ds.sType = XGL_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO;
+ ds.sType = VK_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO;
ds.format = demo->depth.format;
- ds.depthTestEnable = XGL_TRUE;
- ds.depthWriteEnable = XGL_TRUE;
- ds.depthFunc = XGL_COMPARE_LESS_EQUAL;
- ds.depthBoundsEnable = XGL_FALSE;
- ds.back.stencilFailOp = XGL_STENCIL_OP_KEEP;
- ds.back.stencilPassOp = XGL_STENCIL_OP_KEEP;
- ds.back.stencilFunc = XGL_COMPARE_ALWAYS;
- ds.stencilTestEnable = XGL_FALSE;
+ ds.depthTestEnable = VK_TRUE;
+ ds.depthWriteEnable = VK_TRUE;
+ ds.depthFunc = VK_COMPARE_LESS_EQUAL;
+ ds.depthBoundsEnable = VK_FALSE;
+ ds.back.stencilFailOp = VK_STENCIL_OP_KEEP;
+ ds.back.stencilPassOp = VK_STENCIL_OP_KEEP;
+ ds.back.stencilFunc = VK_COMPARE_ALWAYS;
+ ds.stencilTestEnable = VK_FALSE;
ds.front = ds.back;
memset(&vs, 0, sizeof(vs));
- vs.sType = XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
- vs.shader.stage = XGL_SHADER_STAGE_VERTEX;
+ vs.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
+ vs.shader.stage = VK_SHADER_STAGE_VERTEX;
vs.shader.shader = demo_prepare_vs(demo);
assert(vs.shader.shader != NULL);
memset(&fs, 0, sizeof(fs));
- fs.sType = XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
- fs.shader.stage = XGL_SHADER_STAGE_FRAGMENT;
+ fs.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
+ fs.shader.stage = VK_SHADER_STAGE_FRAGMENT;
fs.shader.shader = demo_prepare_fs(demo);
assert(fs.shader.shader != NULL);
memset(&ms, 0, sizeof(ms));
- ms.sType = XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO;
+ ms.sType = VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO;
ms.sampleMask = 1;
- ms.multisampleEnable = XGL_FALSE;
+ ms.multisampleEnable = VK_FALSE;
ms.samples = 1;
pipeline.pNext = (const void *) &ia;
ds.pNext = (const void *) &vs;
vs.pNext = (const void *) &fs;
- err = xglCreateGraphicsPipeline(demo->device, &pipeline, &demo->pipeline);
+ err = vkCreateGraphicsPipeline(demo->device, &pipeline, &demo->pipeline);
assert(!err);
- xglDestroyObject(vs.shader.shader);
- xglDestroyObject(fs.shader.shader);
+ vkDestroyObject(vs.shader.shader);
+ vkDestroyObject(fs.shader.shader);
}
static void demo_prepare_dynamic_states(struct demo *demo)
{
- XGL_DYNAMIC_VP_STATE_CREATE_INFO viewport_create;
- XGL_DYNAMIC_RS_STATE_CREATE_INFO raster;
- XGL_DYNAMIC_CB_STATE_CREATE_INFO color_blend;
- XGL_DYNAMIC_DS_STATE_CREATE_INFO depth_stencil;
- XGL_RESULT err;
+ VK_DYNAMIC_VP_STATE_CREATE_INFO viewport_create;
+ VK_DYNAMIC_RS_STATE_CREATE_INFO raster;
+ VK_DYNAMIC_CB_STATE_CREATE_INFO color_blend;
+ VK_DYNAMIC_DS_STATE_CREATE_INFO depth_stencil;
+ VK_RESULT err;
memset(&viewport_create, 0, sizeof(viewport_create));
- viewport_create.sType = XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO;
+ viewport_create.sType = VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO;
viewport_create.viewportAndScissorCount = 1;
- XGL_VIEWPORT viewport;
+ VK_VIEWPORT viewport;
memset(&viewport, 0, sizeof(viewport));
viewport.height = (float) demo->height;
viewport.width = (float) demo->width;
viewport.minDepth = (float) 0.0f;
viewport.maxDepth = (float) 1.0f;
viewport_create.pViewports = &viewport;
- XGL_RECT scissor;
+ VK_RECT scissor;
memset(&scissor, 0, sizeof(scissor));
scissor.extent.width = demo->width;
scissor.extent.height = demo->height;
viewport_create.pScissors = &scissor;
memset(&raster, 0, sizeof(raster));
- raster.sType = XGL_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO;
+ raster.sType = VK_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO;
raster.pointSize = 1.0;
raster.lineWidth = 1.0;
memset(&color_blend, 0, sizeof(color_blend));
- color_blend.sType = XGL_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO;
+ color_blend.sType = VK_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO;
color_blend.blendConst[0] = 1.0f;
color_blend.blendConst[1] = 1.0f;
color_blend.blendConst[2] = 1.0f;
color_blend.blendConst[3] = 1.0f;
memset(&depth_stencil, 0, sizeof(depth_stencil));
- depth_stencil.sType = XGL_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO;
+ depth_stencil.sType = VK_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO;
depth_stencil.minDepth = 0.0f;
depth_stencil.maxDepth = 1.0f;
depth_stencil.stencilBackRef = 0;
depth_stencil.stencilReadMask = 0xff;
depth_stencil.stencilWriteMask = 0xff;
- err = xglCreateDynamicViewportState(demo->device, &viewport_create, &demo->viewport);
+ err = vkCreateDynamicViewportState(demo->device, &viewport_create, &demo->viewport);
assert(!err);
- err = xglCreateDynamicRasterState(demo->device, &raster, &demo->raster);
+ err = vkCreateDynamicRasterState(demo->device, &raster, &demo->raster);
assert(!err);
- err = xglCreateDynamicColorBlendState(demo->device,
+ err = vkCreateDynamicColorBlendState(demo->device,
&color_blend, &demo->color_blend);
assert(!err);
- err = xglCreateDynamicDepthStencilState(demo->device,
+ err = vkCreateDynamicDepthStencilState(demo->device,
&depth_stencil, &demo->depth_stencil);
assert(!err);
}
static void demo_prepare_descriptor_pool(struct demo *demo)
{
- const XGL_DESCRIPTOR_TYPE_COUNT type_counts[2] = {
+ const VK_DESCRIPTOR_TYPE_COUNT type_counts[2] = {
[0] = {
- .type = XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
+ .type = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
.count = 1,
},
[1] = {
- .type = XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
+ .type = VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
.count = DEMO_TEXTURE_COUNT,
},
};
- const XGL_DESCRIPTOR_POOL_CREATE_INFO descriptor_pool = {
- .sType = XGL_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
+ const VK_DESCRIPTOR_POOL_CREATE_INFO descriptor_pool = {
+ .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
.pNext = NULL,
.count = 2,
.pTypeCount = type_counts,
};
- XGL_RESULT err;
+ VK_RESULT err;
- err = xglCreateDescriptorPool(demo->device,
- XGL_DESCRIPTOR_POOL_USAGE_ONE_SHOT, 1,
+ err = vkCreateDescriptorPool(demo->device,
+ VK_DESCRIPTOR_POOL_USAGE_ONE_SHOT, 1,
&descriptor_pool, &demo->desc_pool);
assert(!err);
}
static void demo_prepare_descriptor_set(struct demo *demo)
{
- XGL_IMAGE_VIEW_ATTACH_INFO view_info[DEMO_TEXTURE_COUNT];
- XGL_SAMPLER_IMAGE_VIEW_INFO combined_info[DEMO_TEXTURE_COUNT];
- XGL_UPDATE_SAMPLER_TEXTURES update_fs;
- XGL_UPDATE_BUFFERS update_vs;
+ VK_IMAGE_VIEW_ATTACH_INFO view_info[DEMO_TEXTURE_COUNT];
+ VK_SAMPLER_IMAGE_VIEW_INFO combined_info[DEMO_TEXTURE_COUNT];
+ VK_UPDATE_SAMPLER_TEXTURES update_fs;
+ VK_UPDATE_BUFFERS update_vs;
const void *update_array[2] = { &update_vs, &update_fs };
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t count;
uint32_t i;
for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
- view_info[i].sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
+ view_info[i].sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
view_info[i].pNext = NULL;
        view_info[i].view = demo->textures[i].view;
- view_info[i].layout = XGL_IMAGE_LAYOUT_GENERAL;
+ view_info[i].layout = VK_IMAGE_LAYOUT_GENERAL;
combined_info[i].sampler = demo->textures[i].sampler;
combined_info[i].pImageView = &view_info[i];
}
memset(&update_vs, 0, sizeof(update_vs));
- update_vs.sType = XGL_STRUCTURE_TYPE_UPDATE_BUFFERS;
+ update_vs.sType = VK_STRUCTURE_TYPE_UPDATE_BUFFERS;
update_vs.pNext = &update_fs;
- update_vs.descriptorType = XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
+ update_vs.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
update_vs.count = 1;
update_vs.pBufferViews = &demo->uniform_data.attach;
memset(&update_fs, 0, sizeof(update_fs));
- update_fs.sType = XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES;
+ update_fs.sType = VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES;
update_fs.binding = 1;
update_fs.count = DEMO_TEXTURE_COUNT;
update_fs.pSamplerImageViews = combined_info;
- err = xglAllocDescriptorSets(demo->desc_pool,
- XGL_DESCRIPTOR_SET_USAGE_STATIC,
+ err = vkAllocDescriptorSets(demo->desc_pool,
+ VK_DESCRIPTOR_SET_USAGE_STATIC,
1, &demo->desc_layout,
&demo->desc_set, &count);
assert(!err && count == 1);
- xglBeginDescriptorPoolUpdate(demo->device,
- XGL_DESCRIPTOR_UPDATE_MODE_FASTEST);
+ vkBeginDescriptorPoolUpdate(demo->device,
+ VK_DESCRIPTOR_UPDATE_MODE_FASTEST);
- xglClearDescriptorSets(demo->desc_pool, 1, &demo->desc_set);
- xglUpdateDescriptors(demo->desc_set, 2, update_array);
+ vkClearDescriptorSets(demo->desc_pool, 1, &demo->desc_set);
+ vkUpdateDescriptors(demo->desc_set, 2, update_array);
- xglEndDescriptorPoolUpdate(demo->device, demo->buffers[demo->current_buffer].cmd);
+ vkEndDescriptorPoolUpdate(demo->device, demo->buffers[demo->current_buffer].cmd);
}
static void demo_prepare(struct demo *demo)
{
- const XGL_CMD_BUFFER_CREATE_INFO cmd = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
+ const VK_CMD_BUFFER_CREATE_INFO cmd = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
.pNext = NULL,
.queueNodeIndex = demo->graphics_queue_node_index,
.flags = 0,
};
- XGL_RESULT err;
+ VK_RESULT err;
demo_prepare_buffers(demo);
demo_prepare_depth(demo);
demo_prepare_dynamic_states(demo);
for (int i = 0; i < DEMO_BUFFER_COUNT; i++) {
- err = xglCreateCommandBuffer(demo->device, &cmd, &demo->buffers[i].cmd);
+ err = vkCreateCommandBuffer(demo->device, &cmd, &demo->buffers[i].cmd);
assert(!err);
}
}
// Wait for work to finish before updating MVP.
- xglDeviceWaitIdle(demo->device);
+ vkDeviceWaitIdle(demo->device);
demo_update_data_buffer(demo);
demo_draw(demo);
// Wait for work to finish before updating MVP.
- xglDeviceWaitIdle(demo->device);
+ vkDeviceWaitIdle(demo->device);
}
}
xcb_map_window(demo->connection, demo->window);
}
-static void demo_init_xgl(struct demo *demo)
+static void demo_init_vk(struct demo *demo)
{
- const XGL_APPLICATION_INFO app = {
- .sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO,
+ const VK_APPLICATION_INFO app = {
+ .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
.pNext = NULL,
.pAppName = "cube",
.appVersion = 0,
.pEngineName = "cube",
.engineVersion = 0,
- .apiVersion = XGL_API_VERSION,
+ .apiVersion = VK_API_VERSION,
};
- const XGL_INSTANCE_CREATE_INFO inst_info = {
- .sType = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
+ const VK_INSTANCE_CREATE_INFO inst_info = {
+ .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
.pNext = NULL,
.pAppInfo = &app,
.pAllocCb = NULL,
.extensionCount = 0,
.ppEnabledExtensionNames = NULL,
};
- const XGL_WSI_X11_CONNECTION_INFO connection = {
+ const VK_WSI_X11_CONNECTION_INFO connection = {
.pConnection = demo->connection,
.root = demo->screen->root,
.provider = 0,
};
- const XGL_DEVICE_QUEUE_CREATE_INFO queue = {
+ const VK_DEVICE_QUEUE_CREATE_INFO queue = {
.queueNodeIndex = 0,
.queueCount = 1,
};
const char *ext_names[] = {
- "XGL_WSI_X11",
+ "VK_WSI_X11",
};
- const XGL_DEVICE_CREATE_INFO device = {
- .sType = XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
+ const VK_DEVICE_CREATE_INFO device = {
+ .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
.pNext = NULL,
.queueRecordCount = 1,
.pRequestedQueues = &queue,
.extensionCount = 1,
.ppEnabledExtensionNames = ext_names,
- .maxValidationLevel = XGL_VALIDATION_LEVEL_END_RANGE,
- .flags = XGL_DEVICE_CREATE_VALIDATION_BIT,
+ .maxValidationLevel = VK_VALIDATION_LEVEL_END_RANGE,
+ .flags = VK_DEVICE_CREATE_VALIDATION_BIT,
};
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t gpu_count;
uint32_t i;
size_t data_size;
uint32_t queue_count;
- err = xglCreateInstance(&inst_info, &demo->inst);
- if (err == XGL_ERROR_INCOMPATIBLE_DRIVER) {
+ err = vkCreateInstance(&inst_info, &demo->inst);
+ if (err == VK_ERROR_INCOMPATIBLE_DRIVER) {
printf("Cannot find a compatible Vulkan installable client driver "
"(ICD).\nExiting ...\n");
fflush(stdout);
assert(!err);
}
- err = xglEnumerateGpus(demo->inst, 1, &gpu_count, &demo->gpu);
+ err = vkEnumerateGpus(demo->inst, 1, &gpu_count, &demo->gpu);
assert(!err && gpu_count == 1);
for (i = 0; i < device.extensionCount; i++) {
- err = xglGetExtensionSupport(demo->gpu, ext_names[i]);
+ err = vkGetExtensionSupport(demo->gpu, ext_names[i]);
assert(!err);
}
- err = xglWsiX11AssociateConnection(demo->gpu, &connection);
+ err = vkWsiX11AssociateConnection(demo->gpu, &connection);
assert(!err);
- err = xglCreateDevice(demo->gpu, &device, &demo->device);
+ err = vkCreateDevice(demo->gpu, &device, &demo->device);
assert(!err);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
&data_size, NULL);
assert(!err);
- demo->gpu_props = (XGL_PHYSICAL_GPU_PROPERTIES *) malloc(data_size);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
+ demo->gpu_props = (VK_PHYSICAL_GPU_PROPERTIES *) malloc(data_size);
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
&data_size, demo->gpu_props);
assert(!err);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&data_size, NULL);
assert(!err);
- demo->queue_props = (XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *) malloc(data_size);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ demo->queue_props = (VK_PHYSICAL_GPU_QUEUE_PROPERTIES *) malloc(data_size);
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&data_size, demo->queue_props);
assert(!err);
- queue_count = (uint32_t)(data_size / sizeof(XGL_PHYSICAL_GPU_QUEUE_PROPERTIES));
+ queue_count = (uint32_t)(data_size / sizeof(VK_PHYSICAL_GPU_QUEUE_PROPERTIES));
assert(queue_count >= 1);
for (i = 0; i < queue_count; i++) {
- if (demo->queue_props[i].queueFlags & XGL_QUEUE_GRAPHICS_BIT)
+ if (demo->queue_props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT)
break;
}
assert(i < queue_count);
demo->graphics_queue_node_index = i;
- err = xglGetDeviceQueue(demo->device, demo->graphics_queue_node_index,
+ err = vkGetDeviceQueue(demo->device, demo->graphics_queue_node_index,
0, &demo->queue);
assert(!err);
}
}
demo_init_connection(demo);
- demo_init_xgl(demo);
+ demo_init_vk(demo);
demo->width = 500;
demo->height = 500;
- demo->format = XGL_FMT_B8G8R8A8_UNORM;
+ demo->format = VK_FMT_B8G8R8A8_UNORM;
demo->spin_angle = 0.01f;
demo->spin_increment = 0.01f;
{
uint32_t i, j;
- xglDestroyObject(demo->desc_set);
- xglDestroyObject(demo->desc_pool);
+ vkDestroyObject(demo->desc_set);
+ vkDestroyObject(demo->desc_pool);
- xglDestroyObject(demo->viewport);
- xglDestroyObject(demo->raster);
- xglDestroyObject(demo->color_blend);
- xglDestroyObject(demo->depth_stencil);
+ vkDestroyObject(demo->viewport);
+ vkDestroyObject(demo->raster);
+ vkDestroyObject(demo->color_blend);
+ vkDestroyObject(demo->depth_stencil);
- xglDestroyObject(demo->pipeline);
- xglDestroyObject(demo->desc_layout_chain);
- xglDestroyObject(demo->desc_layout);
+ vkDestroyObject(demo->pipeline);
+ vkDestroyObject(demo->desc_layout_chain);
+ vkDestroyObject(demo->desc_layout);
for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
- xglDestroyObject(demo->textures[i].view);
- xglBindObjectMemory(demo->textures[i].image, 0, XGL_NULL_HANDLE, 0);
- xglDestroyObject(demo->textures[i].image);
+ vkDestroyObject(demo->textures[i].view);
+ vkBindObjectMemory(demo->textures[i].image, 0, VK_NULL_HANDLE, 0);
+ vkDestroyObject(demo->textures[i].image);
demo_remove_mem_refs(demo, demo->textures[i].num_mem, demo->textures[i].mem);
for (j = 0; j < demo->textures[i].num_mem; j++)
- xglFreeMemory(demo->textures[i].mem[j]);
- xglDestroyObject(demo->textures[i].sampler);
+ vkFreeMemory(demo->textures[i].mem[j]);
+ vkDestroyObject(demo->textures[i].sampler);
}
- xglDestroyObject(demo->depth.view);
- xglBindObjectMemory(demo->depth.image, 0, XGL_NULL_HANDLE, 0);
- xglDestroyObject(demo->depth.image);
+ vkDestroyObject(demo->depth.view);
+ vkBindObjectMemory(demo->depth.image, 0, VK_NULL_HANDLE, 0);
+ vkDestroyObject(demo->depth.image);
demo_remove_mem_refs(demo, demo->depth.num_mem, demo->depth.mem);
for (j = 0; j < demo->depth.num_mem; j++) {
- xglFreeMemory(demo->depth.mem[j]);
+ vkFreeMemory(demo->depth.mem[j]);
}
- xglDestroyObject(demo->uniform_data.view);
- xglBindObjectMemory(demo->uniform_data.buf, 0, XGL_NULL_HANDLE, 0);
- xglDestroyObject(demo->uniform_data.buf);
+ vkDestroyObject(demo->uniform_data.view);
+ vkBindObjectMemory(demo->uniform_data.buf, 0, VK_NULL_HANDLE, 0);
+ vkDestroyObject(demo->uniform_data.buf);
demo_remove_mem_refs(demo, demo->uniform_data.num_mem, demo->uniform_data.mem);
for (j = 0; j < demo->uniform_data.num_mem; j++)
- xglFreeMemory(demo->uniform_data.mem[j]);
+ vkFreeMemory(demo->uniform_data.mem[j]);
for (i = 0; i < DEMO_BUFFER_COUNT; i++) {
- xglDestroyObject(demo->buffers[i].fence);
- xglDestroyObject(demo->buffers[i].view);
- xglDestroyObject(demo->buffers[i].image);
- xglDestroyObject(demo->buffers[i].cmd);
+ vkDestroyObject(demo->buffers[i].fence);
+ vkDestroyObject(demo->buffers[i].view);
+ vkDestroyObject(demo->buffers[i].image);
+ vkDestroyObject(demo->buffers[i].cmd);
demo_remove_mem_refs(demo, 1, &demo->buffers[i].mem);
}
- xglDestroyDevice(demo->device);
- xglDestroyInstance(demo->inst);
+ vkDestroyDevice(demo->device);
+ vkDestroyInstance(demo->inst);
xcb_destroy_window(demo->connection, demo->window);
xcb_disconnect(demo->connection);
#include <assert.h>
#include <xcb/xcb.h>
-#include <xgl.h>
-#include <xglDbg.h>
-#include <xglWsiX11Ext.h>
+#include <vulkan.h>
+#include <vkDbg.h>
+#include <vkWsiX11Ext.h>
#include "icd-spv.h"
#define VERTEX_BUFFER_BIND_ID 0
struct texture_object {
- XGL_SAMPLER sampler;
+ VK_SAMPLER sampler;
- XGL_IMAGE image;
- XGL_IMAGE_LAYOUT imageLayout;
+ VK_IMAGE image;
+ VK_IMAGE_LAYOUT imageLayout;
uint32_t num_mem;
- XGL_GPU_MEMORY *mem;
- XGL_IMAGE_VIEW view;
+ VK_GPU_MEMORY *mem;
+ VK_IMAGE_VIEW view;
int32_t tex_width, tex_height;
};
xcb_connection_t *connection;
xcb_screen_t *screen;
- XGL_INSTANCE inst;
- XGL_PHYSICAL_GPU gpu;
- XGL_DEVICE device;
- XGL_QUEUE queue;
- XGL_PHYSICAL_GPU_PROPERTIES *gpu_props;
- XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
+ VK_INSTANCE inst;
+ VK_PHYSICAL_GPU gpu;
+ VK_DEVICE device;
+ VK_QUEUE queue;
+ VK_PHYSICAL_GPU_PROPERTIES *gpu_props;
+ VK_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
uint32_t graphics_queue_node_index;
int width, height;
- XGL_FORMAT format;
+ VK_FORMAT format;
struct {
- XGL_IMAGE image;
- XGL_GPU_MEMORY mem;
+ VK_IMAGE image;
+ VK_GPU_MEMORY mem;
- XGL_COLOR_ATTACHMENT_VIEW view;
- XGL_FENCE fence;
+ VK_COLOR_ATTACHMENT_VIEW view;
+ VK_FENCE fence;
} buffers[DEMO_BUFFER_COUNT];
struct {
- XGL_FORMAT format;
+ VK_FORMAT format;
- XGL_IMAGE image;
+ VK_IMAGE image;
uint32_t num_mem;
- XGL_GPU_MEMORY *mem;
- XGL_DEPTH_STENCIL_VIEW view;
+ VK_GPU_MEMORY *mem;
+ VK_DEPTH_STENCIL_VIEW view;
} depth;
struct texture_object textures[DEMO_TEXTURE_COUNT];
struct {
- XGL_BUFFER buf;
+ VK_BUFFER buf;
uint32_t num_mem;
- XGL_GPU_MEMORY *mem;
+ VK_GPU_MEMORY *mem;
- XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO vi;
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION vi_bindings[1];
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attrs[2];
+ VK_PIPELINE_VERTEX_INPUT_CREATE_INFO vi;
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION vi_bindings[1];
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION vi_attrs[2];
} vertices;
- XGL_CMD_BUFFER cmd; // Buffer for initialization commands
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN desc_layout_chain;
- XGL_DESCRIPTOR_SET_LAYOUT desc_layout;
- XGL_PIPELINE pipeline;
+ VK_CMD_BUFFER cmd; // Buffer for initialization commands
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN desc_layout_chain;
+ VK_DESCRIPTOR_SET_LAYOUT desc_layout;
+ VK_PIPELINE pipeline;
- XGL_DYNAMIC_VP_STATE_OBJECT viewport;
- XGL_DYNAMIC_RS_STATE_OBJECT raster;
- XGL_DYNAMIC_CB_STATE_OBJECT color_blend;
- XGL_DYNAMIC_DS_STATE_OBJECT depth_stencil;
+ VK_DYNAMIC_VP_STATE_OBJECT viewport;
+ VK_DYNAMIC_RS_STATE_OBJECT raster;
+ VK_DYNAMIC_CB_STATE_OBJECT color_blend;
+ VK_DYNAMIC_DS_STATE_OBJECT depth_stencil;
- XGL_DESCRIPTOR_POOL desc_pool;
- XGL_DESCRIPTOR_SET desc_set;
+ VK_DESCRIPTOR_POOL desc_pool;
+ VK_DESCRIPTOR_SET desc_set;
xcb_window_t window;
xcb_intern_atom_reply_t *atom_wm_delete_window;
static void demo_flush_init_cmd(struct demo *demo)
{
- XGL_RESULT err;
+ VK_RESULT err;
- if (demo->cmd == XGL_NULL_HANDLE)
+ if (demo->cmd == VK_NULL_HANDLE)
return;
- err = xglEndCommandBuffer(demo->cmd);
+ err = vkEndCommandBuffer(demo->cmd);
assert(!err);
- const XGL_CMD_BUFFER cmd_bufs[] = { demo->cmd };
+ const VK_CMD_BUFFER cmd_bufs[] = { demo->cmd };
- err = xglQueueSubmit(demo->queue, 1, cmd_bufs, XGL_NULL_HANDLE);
+ err = vkQueueSubmit(demo->queue, 1, cmd_bufs, VK_NULL_HANDLE);
assert(!err);
- err = xglQueueWaitIdle(demo->queue);
+ err = vkQueueWaitIdle(demo->queue);
assert(!err);
- xglDestroyObject(demo->cmd);
- demo->cmd = XGL_NULL_HANDLE;
+ vkDestroyObject(demo->cmd);
+ demo->cmd = VK_NULL_HANDLE;
}
static void demo_add_mem_refs(
struct demo *demo,
- int num_refs, XGL_GPU_MEMORY *mem)
+ int num_refs, VK_GPU_MEMORY *mem)
{
for (int i = 0; i < num_refs; i++) {
- xglQueueAddMemReference(demo->queue, mem[i]);
+ vkQueueAddMemReference(demo->queue, mem[i]);
}
}
static void demo_remove_mem_refs(
struct demo *demo,
- int num_refs, XGL_GPU_MEMORY *mem)
+ int num_refs, VK_GPU_MEMORY *mem)
{
for (int i = 0; i < num_refs; i++) {
- xglQueueRemoveMemReference(demo->queue, mem[i]);
+ vkQueueRemoveMemReference(demo->queue, mem[i]);
}
}
static void demo_set_image_layout(
struct demo *demo,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT old_image_layout,
- XGL_IMAGE_LAYOUT new_image_layout)
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT old_image_layout,
+ VK_IMAGE_LAYOUT new_image_layout)
{
- XGL_RESULT err;
+ VK_RESULT err;
- if (demo->cmd == XGL_NULL_HANDLE) {
- const XGL_CMD_BUFFER_CREATE_INFO cmd = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
+ if (demo->cmd == VK_NULL_HANDLE) {
+ const VK_CMD_BUFFER_CREATE_INFO cmd = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
.pNext = NULL,
.queueNodeIndex = demo->graphics_queue_node_index,
.flags = 0,
};
- err = xglCreateCommandBuffer(demo->device, &cmd, &demo->cmd);
+ err = vkCreateCommandBuffer(demo->device, &cmd, &demo->cmd);
assert(!err);
- XGL_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
+ VK_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
.pNext = NULL,
- .flags = XGL_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
- XGL_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
+ .flags = VK_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
+ VK_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
};
- err = xglBeginCommandBuffer(demo->cmd, &cmd_buf_info);
+ err = vkBeginCommandBuffer(demo->cmd, &cmd_buf_info);
}
- XGL_IMAGE_MEMORY_BARRIER image_memory_barrier = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
+ VK_IMAGE_MEMORY_BARRIER image_memory_barrier = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
.pNext = NULL,
.outputMask = 0,
.inputMask = 0,
.oldLayout = old_image_layout,
.newLayout = new_image_layout,
.image = image,
- .subresourceRange = { XGL_IMAGE_ASPECT_COLOR, 0, 1, 0, 0 }
+ .subresourceRange = { VK_IMAGE_ASPECT_COLOR, 0, 1, 0, 0 }
};
- if (new_image_layout == XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL) {
+ if (new_image_layout == VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL) {
/* Make sure anything that was copying from this image has completed */
- image_memory_barrier.inputMask = XGL_MEMORY_INPUT_COPY_BIT;
+ image_memory_barrier.inputMask = VK_MEMORY_INPUT_COPY_BIT;
}
- if (new_image_layout == XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
+ if (new_image_layout == VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) {
/* Make sure any Copy or CPU writes to image are flushed */
- image_memory_barrier.outputMask = XGL_MEMORY_OUTPUT_COPY_BIT | XGL_MEMORY_OUTPUT_CPU_WRITE_BIT;
+ image_memory_barrier.outputMask = VK_MEMORY_OUTPUT_COPY_BIT | VK_MEMORY_OUTPUT_CPU_WRITE_BIT;
}
- XGL_IMAGE_MEMORY_BARRIER *pmemory_barrier = &image_memory_barrier;
+ VK_IMAGE_MEMORY_BARRIER *pmemory_barrier = &image_memory_barrier;
- XGL_PIPE_EVENT set_events[] = { XGL_PIPE_EVENT_TOP_OF_PIPE };
+ VK_PIPE_EVENT set_events[] = { VK_PIPE_EVENT_TOP_OF_PIPE };
- XGL_PIPELINE_BARRIER pipeline_barrier;
- pipeline_barrier.sType = XGL_STRUCTURE_TYPE_PIPELINE_BARRIER;
+ VK_PIPELINE_BARRIER pipeline_barrier;
+ pipeline_barrier.sType = VK_STRUCTURE_TYPE_PIPELINE_BARRIER;
pipeline_barrier.pNext = NULL;
pipeline_barrier.eventCount = 1;
pipeline_barrier.pEvents = set_events;
- pipeline_barrier.waitEvent = XGL_WAIT_EVENT_TOP_OF_PIPE;
+ pipeline_barrier.waitEvent = VK_WAIT_EVENT_TOP_OF_PIPE;
pipeline_barrier.memBarrierCount = 1;
pipeline_barrier.ppMemBarriers = (const void **)&pmemory_barrier;
- xglCmdPipelineBarrier(demo->cmd, &pipeline_barrier);
+ vkCmdPipelineBarrier(demo->cmd, &pipeline_barrier);
}
static void demo_draw_build_cmd(struct demo *demo)
{
- const XGL_COLOR_ATTACHMENT_BIND_INFO color_attachment = {
+ const VK_COLOR_ATTACHMENT_BIND_INFO color_attachment = {
.view = demo->buffers[demo->current_buffer].view,
- .layout = XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
+ .layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
};
- const XGL_DEPTH_STENCIL_BIND_INFO depth_stencil = {
+ const VK_DEPTH_STENCIL_BIND_INFO depth_stencil = {
.view = demo->depth.view,
- .layout = XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
+ .layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
};
- const XGL_CLEAR_COLOR clear_color = {
+ const VK_CLEAR_COLOR clear_color = {
.color.floatColor = { 0.2f, 0.2f, 0.2f, 0.2f },
.useRawValue = false,
};
const float clear_depth = 0.9f;
- XGL_IMAGE_SUBRESOURCE_RANGE clear_range;
- XGL_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
+ VK_IMAGE_SUBRESOURCE_RANGE clear_range;
+ VK_CMD_BUFFER_BEGIN_INFO cmd_buf_info = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO,
.pNext = NULL,
- .flags = XGL_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
- XGL_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
+ .flags = VK_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT |
+ VK_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT,
};
- XGL_RESULT err;
- XGL_ATTACHMENT_LOAD_OP load_op = XGL_ATTACHMENT_LOAD_OP_DONT_CARE;
- XGL_ATTACHMENT_STORE_OP store_op = XGL_ATTACHMENT_STORE_OP_DONT_CARE;
- const XGL_FRAMEBUFFER_CREATE_INFO fb_info = {
- .sType = XGL_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
+ VK_RESULT err;
+ VK_ATTACHMENT_LOAD_OP load_op = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
+ VK_ATTACHMENT_STORE_OP store_op = VK_ATTACHMENT_STORE_OP_DONT_CARE;
+ const VK_FRAMEBUFFER_CREATE_INFO fb_info = {
+ .sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
.pNext = NULL,
.colorAttachmentCount = 1,
- .pColorAttachments = (XGL_COLOR_ATTACHMENT_BIND_INFO*) &color_attachment,
- .pDepthStencilAttachment = (XGL_DEPTH_STENCIL_BIND_INFO*) &depth_stencil,
+ .pColorAttachments = (VK_COLOR_ATTACHMENT_BIND_INFO*) &color_attachment,
+ .pDepthStencilAttachment = (VK_DEPTH_STENCIL_BIND_INFO*) &depth_stencil,
.sampleCount = 1,
.width = demo->width,
.height = demo->height,
.layers = 1,
};
- XGL_RENDER_PASS_CREATE_INFO rp_info;
- XGL_RENDER_PASS_BEGIN rp_begin;
+ VK_RENDER_PASS_CREATE_INFO rp_info;
+ VK_RENDER_PASS_BEGIN rp_begin;
    memset(&rp_info, 0, sizeof(rp_info));
- err = xglCreateFramebuffer(demo->device, &fb_info, &rp_begin.framebuffer);
+ err = vkCreateFramebuffer(demo->device, &fb_info, &rp_begin.framebuffer);
assert(!err);
- rp_info.sType = XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
+ rp_info.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
rp_info.renderArea.extent.width = demo->width;
rp_info.renderArea.extent.height = demo->height;
rp_info.colorAttachmentCount = fb_info.colorAttachmentCount;
rp_info.pColorLoadOps = &load_op;
rp_info.pColorStoreOps = &store_op;
rp_info.pColorLoadClearValues = &clear_color;
- rp_info.depthStencilFormat = XGL_FMT_D16_UNORM;
+ rp_info.depthStencilFormat = VK_FMT_D16_UNORM;
rp_info.depthStencilLayout = depth_stencil.layout;
- rp_info.depthLoadOp = XGL_ATTACHMENT_LOAD_OP_DONT_CARE;
+ rp_info.depthLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
rp_info.depthLoadClearValue = clear_depth;
- rp_info.depthStoreOp = XGL_ATTACHMENT_STORE_OP_DONT_CARE;
- rp_info.stencilLoadOp = XGL_ATTACHMENT_LOAD_OP_DONT_CARE;
+ rp_info.depthStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
+ rp_info.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
rp_info.stencilLoadClearValue = 0;
- rp_info.stencilStoreOp = XGL_ATTACHMENT_STORE_OP_DONT_CARE;
- err = xglCreateRenderPass(demo->device, &rp_info, &(rp_begin.renderPass));
+ rp_info.stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE;
+ err = vkCreateRenderPass(demo->device, &rp_info, &(rp_begin.renderPass));
assert(!err);
- err = xglBeginCommandBuffer(demo->cmd, &cmd_buf_info);
+ err = vkBeginCommandBuffer(demo->cmd, &cmd_buf_info);
assert(!err);
- xglCmdBindPipeline(demo->cmd, XGL_PIPELINE_BIND_POINT_GRAPHICS,
+ vkCmdBindPipeline(demo->cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
demo->pipeline);
- xglCmdBindDescriptorSets(demo->cmd, XGL_PIPELINE_BIND_POINT_GRAPHICS,
+ vkCmdBindDescriptorSets(demo->cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
            demo->desc_layout_chain, 0, 1, &demo->desc_set, NULL);
- xglCmdBindDynamicStateObject(demo->cmd, XGL_STATE_BIND_VIEWPORT, demo->viewport);
- xglCmdBindDynamicStateObject(demo->cmd, XGL_STATE_BIND_RASTER, demo->raster);
- xglCmdBindDynamicStateObject(demo->cmd, XGL_STATE_BIND_COLOR_BLEND,
+ vkCmdBindDynamicStateObject(demo->cmd, VK_STATE_BIND_VIEWPORT, demo->viewport);
+ vkCmdBindDynamicStateObject(demo->cmd, VK_STATE_BIND_RASTER, demo->raster);
+ vkCmdBindDynamicStateObject(demo->cmd, VK_STATE_BIND_COLOR_BLEND,
demo->color_blend);
- xglCmdBindDynamicStateObject(demo->cmd, XGL_STATE_BIND_DEPTH_STENCIL,
+ vkCmdBindDynamicStateObject(demo->cmd, VK_STATE_BIND_DEPTH_STENCIL,
demo->depth_stencil);
- xglCmdBindVertexBuffer(demo->cmd, demo->vertices.buf, 0, VERTEX_BUFFER_BIND_ID);
+ vkCmdBindVertexBuffer(demo->cmd, demo->vertices.buf, 0, VERTEX_BUFFER_BIND_ID);
- xglCmdBeginRenderPass(demo->cmd, &rp_begin);
- clear_range.aspect = XGL_IMAGE_ASPECT_COLOR;
+ vkCmdBeginRenderPass(demo->cmd, &rp_begin);
+ clear_range.aspect = VK_IMAGE_ASPECT_COLOR;
clear_range.baseMipLevel = 0;
clear_range.mipLevels = 1;
clear_range.baseArraySlice = 0;
clear_range.arraySize = 1;
- xglCmdClearColorImage(demo->cmd,
+ vkCmdClearColorImage(demo->cmd,
demo->buffers[demo->current_buffer].image,
- XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
clear_color, 1, &clear_range);
- clear_range.aspect = XGL_IMAGE_ASPECT_DEPTH;
- xglCmdClearDepthStencil(demo->cmd,
- demo->depth.image, XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL,
+ clear_range.aspect = VK_IMAGE_ASPECT_DEPTH;
+ vkCmdClearDepthStencil(demo->cmd,
+ demo->depth.image, VK_IMAGE_LAYOUT_CLEAR_OPTIMAL,
clear_depth, 0, 1, &clear_range);
- xglCmdDraw(demo->cmd, 0, 3, 0, 1);
- xglCmdEndRenderPass(demo->cmd, rp_begin.renderPass);
+ vkCmdDraw(demo->cmd, 0, 3, 0, 1);
+ vkCmdEndRenderPass(demo->cmd, rp_begin.renderPass);
- err = xglEndCommandBuffer(demo->cmd);
+ err = vkEndCommandBuffer(demo->cmd);
assert(!err);
- xglDestroyObject(rp_begin.renderPass);
- xglDestroyObject(rp_begin.framebuffer);
+ vkDestroyObject(rp_begin.renderPass);
+ vkDestroyObject(rp_begin.framebuffer);
}
static void demo_draw(struct demo *demo)
{
- const XGL_WSI_X11_PRESENT_INFO present = {
+ const VK_WSI_X11_PRESENT_INFO present = {
.destWindow = demo->window,
.srcImage = demo->buffers[demo->current_buffer].image,
};
- XGL_FENCE fence = demo->buffers[demo->current_buffer].fence;
- XGL_RESULT err;
+ VK_FENCE fence = demo->buffers[demo->current_buffer].fence;
+ VK_RESULT err;
demo_draw_build_cmd(demo);
- err = xglWaitForFences(demo->device, 1, &fence, XGL_TRUE, ~((uint64_t) 0));
- assert(err == XGL_SUCCESS || err == XGL_ERROR_UNAVAILABLE);
+ err = vkWaitForFences(demo->device, 1, &fence, VK_TRUE, ~((uint64_t) 0));
+ assert(err == VK_SUCCESS || err == VK_ERROR_UNAVAILABLE);
- err = xglQueueSubmit(demo->queue, 1, &demo->cmd, XGL_NULL_HANDLE);
+ err = vkQueueSubmit(demo->queue, 1, &demo->cmd, VK_NULL_HANDLE);
assert(!err);
- err = xglWsiX11QueuePresent(demo->queue, &present, fence);
+ err = vkWsiX11QueuePresent(demo->queue, &present, fence);
assert(!err);
demo->current_buffer = (demo->current_buffer + 1) % DEMO_BUFFER_COUNT;
static void demo_prepare_buffers(struct demo *demo)
{
- const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO presentable_image = {
+ const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO presentable_image = {
.format = demo->format,
- .usage = XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
+ .usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
.extent = {
.width = demo->width,
.height = demo->height,
},
.flags = 0,
};
- const XGL_FENCE_CREATE_INFO fence = {
- .sType = XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO,
+ const VK_FENCE_CREATE_INFO fence = {
+ .sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,
.pNext = NULL,
.flags = 0,
};
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i;
for (i = 0; i < DEMO_BUFFER_COUNT; i++) {
- XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO color_attachment_view = {
- .sType = XGL_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
+ VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO color_attachment_view = {
+ .sType = VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO,
.pNext = NULL,
.format = demo->format,
.mipLevel = 0,
.arraySize = 1,
};
- err = xglWsiX11CreatePresentableImage(demo->device, &presentable_image,
+ err = vkWsiX11CreatePresentableImage(demo->device, &presentable_image,
&demo->buffers[i].image, &demo->buffers[i].mem);
assert(!err);
-
demo_add_mem_refs(demo, 1, &demo->buffers[i].mem);
demo_set_image_layout(demo, demo->buffers[i].image,
- XGL_IMAGE_LAYOUT_UNDEFINED,
- XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
+ VK_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL);
color_attachment_view.image = demo->buffers[i].image;
- err = xglCreateColorAttachmentView(demo->device,
+ err = vkCreateColorAttachmentView(demo->device,
&color_attachment_view, &demo->buffers[i].view);
assert(!err);
- err = xglCreateFence(demo->device,
+ err = vkCreateFence(demo->device,
&fence, &demo->buffers[i].fence);
assert(!err);
}
static void demo_prepare_depth(struct demo *demo)
{
- const XGL_FORMAT depth_format = XGL_FMT_D16_UNORM;
- const XGL_IMAGE_CREATE_INFO image = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
+ const VK_FORMAT depth_format = VK_FMT_D16_UNORM;
+ const VK_IMAGE_CREATE_INFO image = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
.pNext = NULL,
- .imageType = XGL_IMAGE_2D,
+ .imageType = VK_IMAGE_2D,
.format = depth_format,
.extent = { demo->width, demo->height, 1 },
.mipLevels = 1,
.arraySize = 1,
.samples = 1,
- .tiling = XGL_OPTIMAL_TILING,
- .usage = XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT,
+ .tiling = VK_OPTIMAL_TILING,
+ .usage = VK_IMAGE_USAGE_DEPTH_STENCIL_BIT,
.flags = 0,
};
- XGL_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
+ VK_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
.pNext = NULL,
};
- XGL_MEMORY_ALLOC_INFO mem_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
+ VK_MEMORY_ALLOC_INFO mem_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
.pNext = &img_alloc,
.allocationSize = 0,
- .memProps = XGL_MEMORY_PROPERTY_GPU_ONLY,
- .memType = XGL_MEMORY_TYPE_IMAGE,
- .memPriority = XGL_MEMORY_PRIORITY_NORMAL,
+ .memProps = VK_MEMORY_PROPERTY_GPU_ONLY,
+ .memType = VK_MEMORY_TYPE_IMAGE,
+ .memPriority = VK_MEMORY_PRIORITY_NORMAL,
};
- XGL_DEPTH_STENCIL_VIEW_CREATE_INFO view = {
- .sType = XGL_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO,
+ VK_DEPTH_STENCIL_VIEW_CREATE_INFO view = {
+ .sType = VK_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO,
.pNext = NULL,
- .image = XGL_NULL_HANDLE,
+ .image = VK_NULL_HANDLE,
.mipLevel = 0,
.baseArraySlice = 0,
.arraySize = 1,
.flags = 0,
};
- XGL_MEMORY_REQUIREMENTS *mem_reqs;
- size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
- XGL_IMAGE_MEMORY_REQUIREMENTS img_reqs;
- size_t img_reqs_size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
- XGL_RESULT err;
+ VK_MEMORY_REQUIREMENTS *mem_reqs;
+ size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
+ VK_IMAGE_MEMORY_REQUIREMENTS img_reqs;
+ size_t img_reqs_size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
+ VK_RESULT err;
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
demo->depth.format = depth_format;
/* create image */
- err = xglCreateImage(demo->device, &image,
+ err = vkCreateImage(demo->device, &image,
&demo->depth.image);
assert(!err);
- err = xglGetObjectInfo(demo->depth.image, XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT, &num_alloc_size, &num_allocations);
+ err = vkGetObjectInfo(demo->depth.image, VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT, &num_alloc_size, &num_allocations);
assert(!err && num_alloc_size == sizeof(num_allocations));
- mem_reqs = malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- demo->depth.mem = malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
+ mem_reqs = malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ demo->depth.mem = malloc(num_allocations * sizeof(VK_GPU_MEMORY));
demo->depth.num_mem = num_allocations;
- err = xglGetObjectInfo(demo->depth.image,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(demo->depth.image,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_reqs);
- assert(!err && mem_reqs_size == num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- err = xglGetObjectInfo(demo->depth.image,
- XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
+ assert(!err && mem_reqs_size == num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ err = vkGetObjectInfo(demo->depth.image,
+ VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
&img_reqs_size, &img_reqs);
- assert(!err && img_reqs_size == sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS));
+ assert(!err && img_reqs_size == sizeof(VK_IMAGE_MEMORY_REQUIREMENTS));
img_alloc.usage = img_reqs.usage;
img_alloc.formatClass = img_reqs.formatClass;
img_alloc.samples = img_reqs.samples;
for (uint32_t i = 0; i < num_allocations; i ++) {
mem_alloc.allocationSize = mem_reqs[i].size;
/* allocate memory */
- err = xglAllocMemory(demo->device, &mem_alloc,
+ err = vkAllocMemory(demo->device, &mem_alloc,
&(demo->depth.mem[i]));
assert(!err);
/* bind memory */
- err = xglBindObjectMemory(demo->depth.image, i,
+ err = vkBindObjectMemory(demo->depth.image, i,
demo->depth.mem[i], 0);
assert(!err);
}
demo_set_image_layout(demo, demo->depth.image,
- XGL_IMAGE_LAYOUT_UNDEFINED,
- XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
+ VK_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL);
demo_add_mem_refs(demo, demo->depth.num_mem, demo->depth.mem);
/* create image view */
view.image = demo->depth.image;
- err = xglCreateDepthStencilView(demo->device, &view,
+ err = vkCreateDepthStencilView(demo->device, &view,
&demo->depth.view);
assert(!err);
}
static void demo_prepare_texture_image(struct demo *demo,
const uint32_t *tex_colors,
struct texture_object *tex_obj,
- XGL_IMAGE_TILING tiling,
- XGL_FLAGS mem_props)
+ VK_IMAGE_TILING tiling,
+ VK_FLAGS mem_props)
{
- const XGL_FORMAT tex_format = XGL_FMT_B8G8R8A8_UNORM;
+ const VK_FORMAT tex_format = VK_FMT_B8G8R8A8_UNORM;
const int32_t tex_width = 2;
const int32_t tex_height = 2;
- XGL_RESULT err;
+ VK_RESULT err;
tex_obj->tex_width = tex_width;
tex_obj->tex_height = tex_height;
- const XGL_IMAGE_CREATE_INFO image_create_info = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
+ const VK_IMAGE_CREATE_INFO image_create_info = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
.pNext = NULL,
- .imageType = XGL_IMAGE_2D,
+ .imageType = VK_IMAGE_2D,
.format = tex_format,
.extent = { tex_width, tex_height, 1 },
.mipLevels = 1,
.arraySize = 1,
.samples = 1,
.tiling = tiling,
- .usage = XGL_IMAGE_USAGE_TRANSFER_SOURCE_BIT,
+ .usage = VK_IMAGE_USAGE_TRANSFER_SOURCE_BIT,
.flags = 0,
};
- XGL_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
+ VK_MEMORY_ALLOC_IMAGE_INFO img_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO,
.pNext = NULL,
};
- XGL_MEMORY_ALLOC_INFO mem_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
+ VK_MEMORY_ALLOC_INFO mem_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
.pNext = &img_alloc,
.allocationSize = 0,
.memProps = mem_props,
- .memType = XGL_MEMORY_TYPE_IMAGE,
- .memPriority = XGL_MEMORY_PRIORITY_NORMAL,
+ .memType = VK_MEMORY_TYPE_IMAGE,
+ .memPriority = VK_MEMORY_PRIORITY_NORMAL,
};
- XGL_MEMORY_REQUIREMENTS *mem_reqs;
- size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
- XGL_IMAGE_MEMORY_REQUIREMENTS img_reqs;
- size_t img_reqs_size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
+ VK_MEMORY_REQUIREMENTS *mem_reqs;
+ size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
+ VK_IMAGE_MEMORY_REQUIREMENTS img_reqs;
+ size_t img_reqs_size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
- err = xglCreateImage(demo->device, &image_create_info,
+ err = vkCreateImage(demo->device, &image_create_info,
&tex_obj->image);
assert(!err);
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
&num_alloc_size, &num_allocations);
assert(!err && num_alloc_size == sizeof(num_allocations));
- mem_reqs = malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- tex_obj->mem = malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ mem_reqs = malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ tex_obj->mem = malloc(num_allocations * sizeof(VK_GPU_MEMORY));
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_reqs);
- assert(!err && mem_reqs_size == num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- err = xglGetObjectInfo(tex_obj->image,
- XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
+ assert(!err && mem_reqs_size == num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ err = vkGetObjectInfo(tex_obj->image,
+ VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
&img_reqs_size, &img_reqs);
- assert(!err && img_reqs_size == sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS));
+ assert(!err && img_reqs_size == sizeof(VK_IMAGE_MEMORY_REQUIREMENTS));
img_alloc.usage = img_reqs.usage;
img_alloc.formatClass = img_reqs.formatClass;
img_alloc.samples = img_reqs.samples;
- mem_alloc.memProps = XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT;
+ mem_alloc.memProps = VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT;
for (uint32_t j = 0; j < num_allocations; j ++) {
mem_alloc.allocationSize = mem_reqs[j].size;
mem_alloc.memType = mem_reqs[j].memType;
/* allocate memory */
- err = xglAllocMemory(demo->device, &mem_alloc,
+ err = vkAllocMemory(demo->device, &mem_alloc,
&(tex_obj->mem[j]));
assert(!err);
/* bind memory */
- err = xglBindObjectMemory(tex_obj->image, j, tex_obj->mem[j], 0);
+ err = vkBindObjectMemory(tex_obj->image, j, tex_obj->mem[j], 0);
assert(!err);
}
free(mem_reqs);
tex_obj->num_mem = num_allocations;
- if (mem_props & XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT) {
- const XGL_IMAGE_SUBRESOURCE subres = {
- .aspect = XGL_IMAGE_ASPECT_COLOR,
+ if (mem_props & VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT) {
+ const VK_IMAGE_SUBRESOURCE subres = {
+ .aspect = VK_IMAGE_ASPECT_COLOR,
.mipLevel = 0,
.arraySlice = 0,
};
- XGL_SUBRESOURCE_LAYOUT layout;
- size_t layout_size = sizeof(XGL_SUBRESOURCE_LAYOUT);
+ VK_SUBRESOURCE_LAYOUT layout;
+ size_t layout_size = sizeof(VK_SUBRESOURCE_LAYOUT);
void *data;
int32_t x, y;
- err = xglGetImageSubresourceInfo(tex_obj->image, &subres,
- XGL_INFO_TYPE_SUBRESOURCE_LAYOUT,
+ err = vkGetImageSubresourceInfo(tex_obj->image, &subres,
+ VK_INFO_TYPE_SUBRESOURCE_LAYOUT,
&layout_size, &layout);
assert(!err && layout_size == sizeof(layout));
/* Linear texture must be within a single memory object */
assert(num_allocations == 1);
- err = xglMapMemory(tex_obj->mem[0], 0, &data);
+ err = vkMapMemory(tex_obj->mem[0], 0, &data);
assert(!err);
for (y = 0; y < tex_height; y++) {
uint32_t *row = (uint32_t *) ((char *) data + layout.rowPitch * y);
for (x = 0; x < tex_width; x++)
row[x] = tex_colors[(x & 1) ^ (y & 1)];
}
- err = xglUnmapMemory(tex_obj->mem[0]);
+ err = vkUnmapMemory(tex_obj->mem[0]);
assert(!err);
}
- tex_obj->imageLayout = XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
+ tex_obj->imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
demo_set_image_layout(demo, tex_obj->image,
- XGL_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_UNDEFINED,
tex_obj->imageLayout);
/* setting the image layout does not reference the actual memory so no need to add a mem ref */
}
static void demo_destroy_texture_image(struct texture_object *tex_obj)
{
/* clean up staging resources */
for (uint32_t j = 0; j < tex_obj->num_mem; j ++) {
- xglBindObjectMemory(tex_obj->image, j, XGL_NULL_HANDLE, 0);
- xglFreeMemory(tex_obj->mem[j]);
+ vkBindObjectMemory(tex_obj->image, j, VK_NULL_HANDLE, 0);
+ vkFreeMemory(tex_obj->mem[j]);
}
free(tex_obj->mem);
- xglDestroyObject(tex_obj->image);
+ vkDestroyObject(tex_obj->image);
}
static void demo_prepare_textures(struct demo *demo)
{
- const XGL_FORMAT tex_format = XGL_FMT_B8G8R8A8_UNORM;
- XGL_FORMAT_PROPERTIES props;
+ const VK_FORMAT tex_format = VK_FMT_B8G8R8A8_UNORM;
+ VK_FORMAT_PROPERTIES props;
size_t size = sizeof(props);
const uint32_t tex_colors[DEMO_TEXTURE_COUNT][2] = {
{ 0xffff0000, 0xff00ff00 },
};
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i;
- err = xglGetFormatInfo(demo->device, tex_format,
- XGL_INFO_TYPE_FORMAT_PROPERTIES,
+ err = vkGetFormatInfo(demo->device, tex_format,
+ VK_INFO_TYPE_FORMAT_PROPERTIES,
&size, &props);
assert(!err);
for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
- if ((props.linearTilingFeatures & XGL_FORMAT_IMAGE_SHADER_READ_BIT) && !demo->use_staging_buffer) {
+ if ((props.linearTilingFeatures & VK_FORMAT_IMAGE_SHADER_READ_BIT) && !demo->use_staging_buffer) {
/* Device can texture using linear textures */
demo_prepare_texture_image(demo, tex_colors[i], &demo->textures[i],
- XGL_LINEAR_TILING, XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
- } else if (props.optimalTilingFeatures & XGL_FORMAT_IMAGE_SHADER_READ_BIT){
+ VK_LINEAR_TILING, VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
+ } else if (props.optimalTilingFeatures & VK_FORMAT_IMAGE_SHADER_READ_BIT){
/* Must use staging buffer to copy linear texture to optimized */
struct texture_object staging_texture;
memset(&staging_texture, 0, sizeof(staging_texture));
demo_prepare_texture_image(demo, tex_colors[i], &staging_texture,
- XGL_LINEAR_TILING, XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
+ VK_LINEAR_TILING, VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT);
demo_prepare_texture_image(demo, tex_colors[i], &demo->textures[i],
- XGL_OPTIMAL_TILING, XGL_MEMORY_PROPERTY_GPU_ONLY);
+ VK_OPTIMAL_TILING, VK_MEMORY_PROPERTY_GPU_ONLY);
demo_set_image_layout(demo, staging_texture.image,
staging_texture.imageLayout,
- XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL);
+ VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL);
demo_set_image_layout(demo, demo->textures[i].image,
demo->textures[i].imageLayout,
- XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL);
+ VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL);
- XGL_IMAGE_COPY copy_region = {
- .srcSubresource = { XGL_IMAGE_ASPECT_COLOR, 0, 0 },
+ VK_IMAGE_COPY copy_region = {
+ .srcSubresource = { VK_IMAGE_ASPECT_COLOR, 0, 0 },
.srcOffset = { 0, 0, 0 },
- .destSubresource = { XGL_IMAGE_ASPECT_COLOR, 0, 0 },
+ .destSubresource = { VK_IMAGE_ASPECT_COLOR, 0, 0 },
.destOffset = { 0, 0, 0 },
.extent = { staging_texture.tex_width, staging_texture.tex_height, 1 },
};
- xglCmdCopyImage(demo->cmd,
- staging_texture.image, XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
- demo->textures[i].image, XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ vkCmdCopyImage(demo->cmd,
+ staging_texture.image, VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL,
+ demo->textures[i].image, VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
1, &copy_region);
demo_add_mem_refs(demo, staging_texture.num_mem, staging_texture.mem);
demo_add_mem_refs(demo, demo->textures[i].num_mem, demo->textures[i].mem);
demo_set_image_layout(demo, demo->textures[i].image,
- XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
demo->textures[i].imageLayout);
demo_flush_init_cmd(demo);
demo_destroy_texture_image(&staging_texture);
demo_remove_mem_refs(demo, staging_texture.num_mem, staging_texture.mem);
} else {
- /* Can't support XGL_FMT_B8G8R8A8_UNORM !? */
+ /* Can't support VK_FMT_B8G8R8A8_UNORM !? */
assert(!"No support for B8G8R8A8_UNORM as texture image format");
}
- const XGL_SAMPLER_CREATE_INFO sampler = {
- .sType = XGL_STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
+ const VK_SAMPLER_CREATE_INFO sampler = {
+ .sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
.pNext = NULL,
- .magFilter = XGL_TEX_FILTER_NEAREST,
- .minFilter = XGL_TEX_FILTER_NEAREST,
- .mipMode = XGL_TEX_MIPMAP_BASE,
- .addressU = XGL_TEX_ADDRESS_WRAP,
- .addressV = XGL_TEX_ADDRESS_WRAP,
- .addressW = XGL_TEX_ADDRESS_WRAP,
+ .magFilter = VK_TEX_FILTER_NEAREST,
+ .minFilter = VK_TEX_FILTER_NEAREST,
+ .mipMode = VK_TEX_MIPMAP_BASE,
+ .addressU = VK_TEX_ADDRESS_WRAP,
+ .addressV = VK_TEX_ADDRESS_WRAP,
+ .addressW = VK_TEX_ADDRESS_WRAP,
.mipLodBias = 0.0f,
.maxAnisotropy = 1,
- .compareFunc = XGL_COMPARE_NEVER,
+ .compareFunc = VK_COMPARE_NEVER,
.minLod = 0.0f,
.maxLod = 0.0f,
- .borderColorType = XGL_BORDER_COLOR_OPAQUE_WHITE,
+ .borderColorType = VK_BORDER_COLOR_OPAQUE_WHITE,
};
- XGL_IMAGE_VIEW_CREATE_INFO view = {
- .sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
+ VK_IMAGE_VIEW_CREATE_INFO view = {
+ .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
.pNext = NULL,
- .image = XGL_NULL_HANDLE,
- .viewType = XGL_IMAGE_VIEW_2D,
+ .image = VK_NULL_HANDLE,
+ .viewType = VK_IMAGE_VIEW_2D,
.format = tex_format,
- .channels = { XGL_CHANNEL_SWIZZLE_R,
- XGL_CHANNEL_SWIZZLE_G,
- XGL_CHANNEL_SWIZZLE_B,
- XGL_CHANNEL_SWIZZLE_A, },
- .subresourceRange = { XGL_IMAGE_ASPECT_COLOR, 0, 1, 0, 1 },
+ .channels = { VK_CHANNEL_SWIZZLE_R,
+ VK_CHANNEL_SWIZZLE_G,
+ VK_CHANNEL_SWIZZLE_B,
+ VK_CHANNEL_SWIZZLE_A, },
+ .subresourceRange = { VK_IMAGE_ASPECT_COLOR, 0, 1, 0, 1 },
.minLod = 0.0f,
};
/* create sampler */
- err = xglCreateSampler(demo->device, &sampler,
+ err = vkCreateSampler(demo->device, &sampler,
&demo->textures[i].sampler);
assert(!err);
/* create image view */
view.image = demo->textures[i].image;
- err = xglCreateImageView(demo->device, &view,
+ err = vkCreateImageView(demo->device, &view,
&demo->textures[i].view);
assert(!err);
}
{ 1.0f, -1.0f, -0.5f, 1.0f, 0.0f },
{ 0.0f, 1.0f, 1.0f, 0.5f, 1.0f },
};
- const XGL_BUFFER_CREATE_INFO buf_info = {
- .sType = XGL_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
+ const VK_BUFFER_CREATE_INFO buf_info = {
+ .sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
.pNext = NULL,
.size = sizeof(vb),
- .usage = XGL_BUFFER_USAGE_VERTEX_FETCH_BIT,
+ .usage = VK_BUFFER_USAGE_VERTEX_FETCH_BIT,
.flags = 0,
};
- XGL_MEMORY_ALLOC_BUFFER_INFO buf_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO,
+ VK_MEMORY_ALLOC_BUFFER_INFO buf_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO,
.pNext = NULL,
};
- XGL_MEMORY_ALLOC_INFO mem_alloc = {
- .sType = XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
+ VK_MEMORY_ALLOC_INFO mem_alloc = {
+ .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO,
.pNext = &buf_alloc,
.allocationSize = 0,
- .memProps = XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT,
- .memType = XGL_MEMORY_TYPE_BUFFER,
- .memPriority = XGL_MEMORY_PRIORITY_NORMAL,
+ .memProps = VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT,
+ .memType = VK_MEMORY_TYPE_BUFFER,
+ .memPriority = VK_MEMORY_PRIORITY_NORMAL,
};
- XGL_MEMORY_REQUIREMENTS *mem_reqs;
- size_t mem_reqs_size = sizeof(XGL_MEMORY_REQUIREMENTS);
- XGL_BUFFER_MEMORY_REQUIREMENTS buf_reqs;
- size_t buf_reqs_size = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
+ VK_MEMORY_REQUIREMENTS *mem_reqs;
+ size_t mem_reqs_size = sizeof(VK_MEMORY_REQUIREMENTS);
+ VK_BUFFER_MEMORY_REQUIREMENTS buf_reqs;
+ size_t buf_reqs_size = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
uint32_t num_allocations = 0;
size_t num_alloc_size = sizeof(num_allocations);
- XGL_RESULT err;
+ VK_RESULT err;
void *data;
memset(&demo->vertices, 0, sizeof(demo->vertices));
- err = xglCreateBuffer(demo->device, &buf_info, &demo->vertices.buf);
+ err = vkCreateBuffer(demo->device, &buf_info, &demo->vertices.buf);
assert(!err);
- err = xglGetObjectInfo(demo->vertices.buf,
- XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
+ err = vkGetObjectInfo(demo->vertices.buf,
+ VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
&num_alloc_size, &num_allocations);
assert(!err && num_alloc_size == sizeof(num_allocations));
- mem_reqs = malloc(num_allocations * sizeof(XGL_MEMORY_REQUIREMENTS));
- demo->vertices.mem = malloc(num_allocations * sizeof(XGL_GPU_MEMORY));
+ mem_reqs = malloc(num_allocations * sizeof(VK_MEMORY_REQUIREMENTS));
+ demo->vertices.mem = malloc(num_allocations * sizeof(VK_GPU_MEMORY));
demo->vertices.num_mem = num_allocations;
- err = xglGetObjectInfo(demo->vertices.buf,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(demo->vertices.buf,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS,
&mem_reqs_size, mem_reqs);
assert(!err && mem_reqs_size == sizeof(*mem_reqs));
- err = xglGetObjectInfo(demo->vertices.buf,
- XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS,
+ err = vkGetObjectInfo(demo->vertices.buf,
+ VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS,
&buf_reqs_size, &buf_reqs);
- assert(!err && buf_reqs_size == sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS));
+ assert(!err && buf_reqs_size == sizeof(VK_BUFFER_MEMORY_REQUIREMENTS));
buf_alloc.usage = buf_reqs.usage;
for (uint32_t i = 0; i < num_allocations; i ++) {
mem_alloc.allocationSize = mem_reqs[i].size;
- err = xglAllocMemory(demo->device, &mem_alloc, &demo->vertices.mem[i]);
+ err = vkAllocMemory(demo->device, &mem_alloc, &demo->vertices.mem[i]);
assert(!err);
- err = xglMapMemory(demo->vertices.mem[i], 0, &data);
+ err = vkMapMemory(demo->vertices.mem[i], 0, &data);
assert(!err);
memcpy(data, vb, sizeof(vb));
- err = xglUnmapMemory(demo->vertices.mem[i]);
+ err = vkUnmapMemory(demo->vertices.mem[i]);
assert(!err);
- err = xglBindObjectMemory(demo->vertices.buf, i, demo->vertices.mem[i], 0);
+ err = vkBindObjectMemory(demo->vertices.buf, i, demo->vertices.mem[i], 0);
assert(!err);
}
demo_add_mem_refs(demo, demo->vertices.num_mem, demo->vertices.mem);
- demo->vertices.vi.sType = XGL_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO;
+ demo->vertices.vi.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO;
demo->vertices.vi.pNext = NULL;
demo->vertices.vi.bindingCount = 1;
demo->vertices.vi.pVertexBindingDescriptions = demo->vertices.vi_bindings;
demo->vertices.vi_bindings[0].binding = VERTEX_BUFFER_BIND_ID;
demo->vertices.vi_bindings[0].strideInBytes = sizeof(vb[0]);
- demo->vertices.vi_bindings[0].stepRate = XGL_VERTEX_INPUT_STEP_RATE_VERTEX;
+ demo->vertices.vi_bindings[0].stepRate = VK_VERTEX_INPUT_STEP_RATE_VERTEX;
demo->vertices.vi_attrs[0].binding = VERTEX_BUFFER_BIND_ID;
demo->vertices.vi_attrs[0].location = 0;
- demo->vertices.vi_attrs[0].format = XGL_FMT_R32G32B32_SFLOAT;
+ demo->vertices.vi_attrs[0].format = VK_FMT_R32G32B32_SFLOAT;
demo->vertices.vi_attrs[0].offsetInBytes = 0;
demo->vertices.vi_attrs[1].binding = VERTEX_BUFFER_BIND_ID;
demo->vertices.vi_attrs[1].location = 1;
- demo->vertices.vi_attrs[1].format = XGL_FMT_R32G32_SFLOAT;
+ demo->vertices.vi_attrs[1].format = VK_FMT_R32G32_SFLOAT;
demo->vertices.vi_attrs[1].offsetInBytes = sizeof(float) * 3;
}
static void demo_prepare_descriptor_layout(struct demo *demo)
{
- const XGL_DESCRIPTOR_SET_LAYOUT_BINDING layout_binding = {
- .descriptorType = XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
+ const VK_DESCRIPTOR_SET_LAYOUT_BINDING layout_binding = {
+ .descriptorType = VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
.count = DEMO_TEXTURE_COUNT,
- .stageFlags = XGL_SHADER_STAGE_FLAGS_FRAGMENT_BIT,
+ .stageFlags = VK_SHADER_STAGE_FLAGS_FRAGMENT_BIT,
.pImmutableSamplers = NULL,
};
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO descriptor_layout = {
- .sType = XGL_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO descriptor_layout = {
+ .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
.pNext = NULL,
.count = 1,
.pBinding = &layout_binding,
};
- XGL_RESULT err;
+ VK_RESULT err;
- err = xglCreateDescriptorSetLayout(demo->device,
+ err = vkCreateDescriptorSetLayout(demo->device,
&descriptor_layout, &demo->desc_layout);
assert(!err);
- err = xglCreateDescriptorSetLayoutChain(demo->device,
+ err = vkCreateDescriptorSetLayoutChain(demo->device,
1, &demo->desc_layout, &demo->desc_layout_chain);
assert(!err);
}
-static XGL_SHADER demo_prepare_shader(struct demo *demo,
- XGL_PIPELINE_SHADER_STAGE stage,
+static VK_SHADER demo_prepare_shader(struct demo *demo,
+ VK_PIPELINE_SHADER_STAGE stage,
const void *code,
size_t size)
{
- XGL_SHADER_CREATE_INFO createInfo;
- XGL_SHADER shader;
- XGL_RESULT err;
+ VK_SHADER_CREATE_INFO createInfo;
+ VK_SHADER shader;
+ VK_RESULT err;
- createInfo.sType = XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO;
+ createInfo.sType = VK_STRUCTURE_TYPE_SHADER_CREATE_INFO;
createInfo.pNext = NULL;
// Create fake SPV structure to feed GLSL
createInfo.codeSize = 3 * sizeof(uint32_t) + size + 1;
createInfo.pCode = malloc(createInfo.codeSize);
createInfo.flags = 0;
- /* try version 0 first: XGL_PIPELINE_SHADER_STAGE followed by GLSL */
+ /* try version 0 first: VK_PIPELINE_SHADER_STAGE followed by GLSL */
((uint32_t *) createInfo.pCode)[0] = ICD_SPV_MAGIC;
((uint32_t *) createInfo.pCode)[1] = 0;
((uint32_t *) createInfo.pCode)[2] = stage;
memcpy(((uint32_t *) createInfo.pCode + 3), code, size + 1);
- err = xglCreateShader(demo->device, &createInfo, &shader);
+ err = vkCreateShader(demo->device, &createInfo, &shader);
if (err) {
free((void *) createInfo.pCode);
return NULL;
}
return shader;
}
-static XGL_SHADER demo_prepare_vs(struct demo *demo)
+static VK_SHADER demo_prepare_vs(struct demo *demo)
{
static const char *vertShaderText =
"#version 140\n"
" gl_Position = pos;\n"
"}\n";
- return demo_prepare_shader(demo, XGL_SHADER_STAGE_VERTEX,
+ return demo_prepare_shader(demo, VK_SHADER_STAGE_VERTEX,
(const void *) vertShaderText,
strlen(vertShaderText));
}
-static XGL_SHADER demo_prepare_fs(struct demo *demo)
+static VK_SHADER demo_prepare_fs(struct demo *demo)
{
static const char *fragShaderText =
"#version 140\n"
" gl_FragColor = texture(tex, texcoord);\n"
"}\n";
- return demo_prepare_shader(demo, XGL_SHADER_STAGE_FRAGMENT,
+ return demo_prepare_shader(demo, VK_SHADER_STAGE_FRAGMENT,
(const void *) fragShaderText,
strlen(fragShaderText));
}
static void demo_prepare_pipeline(struct demo *demo)
{
- XGL_GRAPHICS_PIPELINE_CREATE_INFO pipeline;
- XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO vi;
- XGL_PIPELINE_IA_STATE_CREATE_INFO ia;
- XGL_PIPELINE_RS_STATE_CREATE_INFO rs;
- XGL_PIPELINE_CB_STATE_CREATE_INFO cb;
- XGL_PIPELINE_DS_STATE_CREATE_INFO ds;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO vs;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO fs;
- XGL_PIPELINE_VP_STATE_CREATE_INFO vp;
- XGL_PIPELINE_MS_STATE_CREATE_INFO ms;
- XGL_RESULT err;
+ VK_GRAPHICS_PIPELINE_CREATE_INFO pipeline;
+ VK_PIPELINE_VERTEX_INPUT_CREATE_INFO vi;
+ VK_PIPELINE_IA_STATE_CREATE_INFO ia;
+ VK_PIPELINE_RS_STATE_CREATE_INFO rs;
+ VK_PIPELINE_CB_STATE_CREATE_INFO cb;
+ VK_PIPELINE_DS_STATE_CREATE_INFO ds;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO vs;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO fs;
+ VK_PIPELINE_VP_STATE_CREATE_INFO vp;
+ VK_PIPELINE_MS_STATE_CREATE_INFO ms;
+ VK_RESULT err;
memset(&pipeline, 0, sizeof(pipeline));
- pipeline.sType = XGL_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
+ pipeline.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
pipeline.pSetLayoutChain = demo->desc_layout_chain;
vi = demo->vertices.vi;
memset(&ia, 0, sizeof(ia));
- ia.sType = XGL_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO;
- ia.topology = XGL_TOPOLOGY_TRIANGLE_LIST;
+ ia.sType = VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO;
+ ia.topology = VK_TOPOLOGY_TRIANGLE_LIST;
memset(&rs, 0, sizeof(rs));
- rs.sType = XGL_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO;
- rs.fillMode = XGL_FILL_SOLID;
- rs.cullMode = XGL_CULL_NONE;
- rs.frontFace = XGL_FRONT_FACE_CCW;
+ rs.sType = VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO;
+ rs.fillMode = VK_FILL_SOLID;
+ rs.cullMode = VK_CULL_NONE;
+ rs.frontFace = VK_FRONT_FACE_CCW;
memset(&cb, 0, sizeof(cb));
- cb.sType = XGL_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO;
- XGL_PIPELINE_CB_ATTACHMENT_STATE att_state[1];
+ cb.sType = VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO;
+ VK_PIPELINE_CB_ATTACHMENT_STATE att_state[1];
memset(att_state, 0, sizeof(att_state));
att_state[0].format = demo->format;
att_state[0].channelWriteMask = 0xf;
- att_state[0].blendEnable = XGL_FALSE;
+ att_state[0].blendEnable = VK_FALSE;
cb.attachmentCount = 1;
cb.pAttachments = att_state;
memset(&vp, 0, sizeof(vp));
- vp.sType = XGL_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO;
+ vp.sType = VK_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO;
vp.numViewports = 1;
- vp.clipOrigin = XGL_COORDINATE_ORIGIN_UPPER_LEFT;
+ vp.clipOrigin = VK_COORDINATE_ORIGIN_UPPER_LEFT;
memset(&ds, 0, sizeof(ds));
- ds.sType = XGL_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO;
+ ds.sType = VK_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO;
ds.format = demo->depth.format;
- ds.depthTestEnable = XGL_TRUE;
- ds.depthWriteEnable = XGL_TRUE;
- ds.depthFunc = XGL_COMPARE_LESS_EQUAL;
- ds.depthBoundsEnable = XGL_FALSE;
- ds.back.stencilFailOp = XGL_STENCIL_OP_KEEP;
- ds.back.stencilPassOp = XGL_STENCIL_OP_KEEP;
- ds.back.stencilFunc = XGL_COMPARE_ALWAYS;
- ds.stencilTestEnable = XGL_FALSE;
+ ds.depthTestEnable = VK_TRUE;
+ ds.depthWriteEnable = VK_TRUE;
+ ds.depthFunc = VK_COMPARE_LESS_EQUAL;
+ ds.depthBoundsEnable = VK_FALSE;
+ ds.back.stencilFailOp = VK_STENCIL_OP_KEEP;
+ ds.back.stencilPassOp = VK_STENCIL_OP_KEEP;
+ ds.back.stencilFunc = VK_COMPARE_ALWAYS;
+ ds.stencilTestEnable = VK_FALSE;
ds.front = ds.back;
memset(&vs, 0, sizeof(vs));
- vs.sType = XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
- vs.shader.stage = XGL_SHADER_STAGE_VERTEX;
+ vs.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
+ vs.shader.stage = VK_SHADER_STAGE_VERTEX;
vs.shader.shader = demo_prepare_vs(demo);
vs.shader.linkConstBufferCount = 0;
memset(&fs, 0, sizeof(fs));
- fs.sType = XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
- fs.shader.stage = XGL_SHADER_STAGE_FRAGMENT;
+ fs.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
+ fs.shader.stage = VK_SHADER_STAGE_FRAGMENT;
fs.shader.shader = demo_prepare_fs(demo);
memset(&ms, 0, sizeof(ms));
- ms.sType = XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO;
+ ms.sType = VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO;
ms.sampleMask = 1;
- ms.multisampleEnable = XGL_FALSE;
+ ms.multisampleEnable = VK_FALSE;
ms.samples = 1;
pipeline.pNext = (const void *) &vi;
ds.pNext = (const void *) &vs;
vs.pNext = (const void *) &fs;
- err = xglCreateGraphicsPipeline(demo->device, &pipeline, &demo->pipeline);
+ err = vkCreateGraphicsPipeline(demo->device, &pipeline, &demo->pipeline);
assert(!err);
- xglDestroyObject(vs.shader.shader);
- xglDestroyObject(fs.shader.shader);
+ vkDestroyObject(vs.shader.shader);
+ vkDestroyObject(fs.shader.shader);
}
static void demo_prepare_dynamic_states(struct demo *demo)
{
- XGL_DYNAMIC_VP_STATE_CREATE_INFO viewport_create;
- XGL_DYNAMIC_RS_STATE_CREATE_INFO raster;
- XGL_DYNAMIC_CB_STATE_CREATE_INFO color_blend;
- XGL_DYNAMIC_DS_STATE_CREATE_INFO depth_stencil;
- XGL_RESULT err;
+ VK_DYNAMIC_VP_STATE_CREATE_INFO viewport_create;
+ VK_DYNAMIC_RS_STATE_CREATE_INFO raster;
+ VK_DYNAMIC_CB_STATE_CREATE_INFO color_blend;
+ VK_DYNAMIC_DS_STATE_CREATE_INFO depth_stencil;
+ VK_RESULT err;
memset(&viewport_create, 0, sizeof(viewport_create));
- viewport_create.sType = XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO;
+ viewport_create.sType = VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO;
viewport_create.viewportAndScissorCount = 1;
- XGL_VIEWPORT viewport;
+ VK_VIEWPORT viewport;
memset(&viewport, 0, sizeof(viewport));
viewport.height = (float) demo->height;
viewport.width = (float) demo->width;
viewport.minDepth = (float) 0.0f;
viewport.maxDepth = (float) 1.0f;
viewport_create.pViewports = &viewport;
- XGL_RECT scissor;
+ VK_RECT scissor;
memset(&scissor, 0, sizeof(scissor));
scissor.extent.width = demo->width;
scissor.extent.height = demo->height;
viewport_create.pScissors = &scissor;
memset(&raster, 0, sizeof(raster));
- raster.sType = XGL_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO;
+ raster.sType = VK_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO;
raster.pointSize = 1.0;
raster.lineWidth = 1.0;
memset(&color_blend, 0, sizeof(color_blend));
- color_blend.sType = XGL_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO;
+ color_blend.sType = VK_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO;
color_blend.blendConst[0] = 1.0f;
color_blend.blendConst[1] = 1.0f;
color_blend.blendConst[2] = 1.0f;
color_blend.blendConst[3] = 1.0f;
memset(&depth_stencil, 0, sizeof(depth_stencil));
- depth_stencil.sType = XGL_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO;
+ depth_stencil.sType = VK_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO;
depth_stencil.minDepth = 0.0f;
depth_stencil.maxDepth = 1.0f;
depth_stencil.stencilBackRef = 0;
depth_stencil.stencilReadMask = 0xff;
depth_stencil.stencilWriteMask = 0xff;
- err = xglCreateDynamicViewportState(demo->device, &viewport_create, &demo->viewport);
+ err = vkCreateDynamicViewportState(demo->device, &viewport_create, &demo->viewport);
assert(!err);
- err = xglCreateDynamicRasterState(demo->device, &raster, &demo->raster);
+ err = vkCreateDynamicRasterState(demo->device, &raster, &demo->raster);
assert(!err);
- err = xglCreateDynamicColorBlendState(demo->device,
+ err = vkCreateDynamicColorBlendState(demo->device,
&color_blend, &demo->color_blend);
assert(!err);
- err = xglCreateDynamicDepthStencilState(demo->device,
+ err = vkCreateDynamicDepthStencilState(demo->device,
&depth_stencil, &demo->depth_stencil);
assert(!err);
}
static void demo_prepare_descriptor_pool(struct demo *demo)
{
- const XGL_DESCRIPTOR_TYPE_COUNT type_count = {
- .type = XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
+ const VK_DESCRIPTOR_TYPE_COUNT type_count = {
+ .type = VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE,
.count = DEMO_TEXTURE_COUNT,
};
- const XGL_DESCRIPTOR_POOL_CREATE_INFO descriptor_pool = {
- .sType = XGL_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
+ const VK_DESCRIPTOR_POOL_CREATE_INFO descriptor_pool = {
+ .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
.pNext = NULL,
.count = 1,
.pTypeCount = &type_count,
};
- XGL_RESULT err;
+ VK_RESULT err;
- err = xglCreateDescriptorPool(demo->device,
- XGL_DESCRIPTOR_POOL_USAGE_ONE_SHOT, 1,
+ err = vkCreateDescriptorPool(demo->device,
+ VK_DESCRIPTOR_POOL_USAGE_ONE_SHOT, 1,
&descriptor_pool, &demo->desc_pool);
assert(!err);
}
static void demo_prepare_descriptor_set(struct demo *demo)
{
- XGL_IMAGE_VIEW_ATTACH_INFO view_info[DEMO_TEXTURE_COUNT];
- XGL_SAMPLER_IMAGE_VIEW_INFO combined_info[DEMO_TEXTURE_COUNT];
- XGL_UPDATE_SAMPLER_TEXTURES update;
+ VK_IMAGE_VIEW_ATTACH_INFO view_info[DEMO_TEXTURE_COUNT];
+ VK_SAMPLER_IMAGE_VIEW_INFO combined_info[DEMO_TEXTURE_COUNT];
+ VK_UPDATE_SAMPLER_TEXTURES update;
const void *update_array[1] = { &update };
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t count;
uint32_t i;
for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
- view_info[i].sType = XGL_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
+ view_info[i].sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO;
view_info[i].pNext = NULL;
view_info[i].view = demo->textures[i].view;
- view_info[i].layout = XGL_IMAGE_LAYOUT_GENERAL;
+ view_info[i].layout = VK_IMAGE_LAYOUT_GENERAL;
combined_info[i].sampler = demo->textures[i].sampler;
combined_info[i].pImageView = &view_info[i];
}
memset(&update, 0, sizeof(update));
- update.sType = XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES;
+ update.sType = VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES;
update.count = DEMO_TEXTURE_COUNT;
update.pSamplerImageViews = combined_info;
- err = xglAllocDescriptorSets(demo->desc_pool,
- XGL_DESCRIPTOR_SET_USAGE_STATIC,
+ err = vkAllocDescriptorSets(demo->desc_pool,
+ VK_DESCRIPTOR_SET_USAGE_STATIC,
1, &demo->desc_layout,
&demo->desc_set, &count);
assert(!err && count == 1);
- xglBeginDescriptorPoolUpdate(demo->device,
- XGL_DESCRIPTOR_UPDATE_MODE_FASTEST);
+ vkBeginDescriptorPoolUpdate(demo->device,
+ VK_DESCRIPTOR_UPDATE_MODE_FASTEST);
- xglClearDescriptorSets(demo->desc_pool, 1, &demo->desc_set);
- xglUpdateDescriptors(demo->desc_set, 1, update_array);
+ vkClearDescriptorSets(demo->desc_pool, 1, &demo->desc_set);
+ vkUpdateDescriptors(demo->desc_set, 1, update_array);
- xglEndDescriptorPoolUpdate(demo->device, demo->cmd);
+ vkEndDescriptorPoolUpdate(demo->device, demo->cmd);
}
static void demo_prepare(struct demo *demo)
{
- const XGL_CMD_BUFFER_CREATE_INFO cmd = {
- .sType = XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
+ const VK_CMD_BUFFER_CREATE_INFO cmd = {
+ .sType = VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO,
.pNext = NULL,
.queueNodeIndex = demo->graphics_queue_node_index,
.flags = 0,
};
- XGL_RESULT err;
+ VK_RESULT err;
demo_prepare_buffers(demo);
demo_prepare_depth(demo);
demo_prepare_pipeline(demo);
demo_prepare_dynamic_states(demo);
- err = xglCreateCommandBuffer(demo->device, &cmd, &demo->cmd);
+ err = vkCreateCommandBuffer(demo->device, &cmd, &demo->cmd);
assert(!err);
demo_prepare_descriptor_pool(demo);
xcb_map_window(demo->connection, demo->window);
}
-static void demo_init_xgl(struct demo *demo)
+static void demo_init_vk(struct demo *demo)
{
- const XGL_APPLICATION_INFO app = {
- .sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO,
+ const VK_APPLICATION_INFO app = {
+ .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
.pNext = NULL,
.pAppName = "tri",
.appVersion = 0,
.pEngineName = "tri",
.engineVersion = 0,
- .apiVersion = XGL_API_VERSION,
+ .apiVersion = VK_API_VERSION,
};
- const XGL_INSTANCE_CREATE_INFO inst_info = {
- .sType = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
+ const VK_INSTANCE_CREATE_INFO inst_info = {
+ .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
.pNext = NULL,
.pAppInfo = &app,
.pAllocCb = NULL,
.extensionCount = 0,
.ppEnabledExtensionNames = NULL,
};
- const XGL_WSI_X11_CONNECTION_INFO connection = {
+ const VK_WSI_X11_CONNECTION_INFO connection = {
.pConnection = demo->connection,
.root = demo->screen->root,
.provider = 0,
};
- const XGL_DEVICE_QUEUE_CREATE_INFO queue = {
+ const VK_DEVICE_QUEUE_CREATE_INFO queue = {
.queueNodeIndex = 0,
.queueCount = 1,
};
const char *ext_names[] = {
- "XGL_WSI_X11",
+ "VK_WSI_X11",
};
- const XGL_DEVICE_CREATE_INFO device = {
- .sType = XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
+ const VK_DEVICE_CREATE_INFO device = {
+ .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
.pNext = NULL,
.queueRecordCount = 1,
.pRequestedQueues = &queue,
.extensionCount = 1,
.ppEnabledExtensionNames = ext_names,
- .maxValidationLevel = XGL_VALIDATION_LEVEL_END_RANGE,
- .flags = XGL_DEVICE_CREATE_VALIDATION_BIT,
+ .maxValidationLevel = VK_VALIDATION_LEVEL_END_RANGE,
+ .flags = VK_DEVICE_CREATE_VALIDATION_BIT,
};
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t gpu_count;
uint32_t i;
size_t data_size;
uint32_t queue_count;
- err = xglCreateInstance(&inst_info, &demo->inst);
- if (err == XGL_ERROR_INCOMPATIBLE_DRIVER) {
+ err = vkCreateInstance(&inst_info, &demo->inst);
+ if (err == VK_ERROR_INCOMPATIBLE_DRIVER) {
printf("Cannot find a compatible Vulkan installable client driver "
"(ICD).\nExiting ...\n");
fflush(stdout);
assert(!err);
}
- err = xglEnumerateGpus(demo->inst, 1, &gpu_count, &demo->gpu);
+ err = vkEnumerateGpus(demo->inst, 1, &gpu_count, &demo->gpu);
assert(!err && gpu_count == 1);
for (i = 0; i < device.extensionCount; i++) {
- err = xglGetExtensionSupport(demo->gpu, ext_names[i]);
+ err = vkGetExtensionSupport(demo->gpu, ext_names[i]);
assert(!err);
}
- err = xglWsiX11AssociateConnection(demo->gpu, &connection);
+ err = vkWsiX11AssociateConnection(demo->gpu, &connection);
assert(!err);
- err = xglCreateDevice(demo->gpu, &device, &demo->device);
+ err = vkCreateDevice(demo->gpu, &device, &demo->device);
assert(!err);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
&data_size, NULL);
assert(!err);
- demo->gpu_props = (XGL_PHYSICAL_GPU_PROPERTIES *) malloc(data_size);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
+ demo->gpu_props = (VK_PHYSICAL_GPU_PROPERTIES *) malloc(data_size);
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
&data_size, demo->gpu_props);
assert(!err);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&data_size, NULL);
assert(!err);
- demo->queue_props = (XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *) malloc(data_size);
- err = xglGetGpuInfo(demo->gpu, XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ demo->queue_props = (VK_PHYSICAL_GPU_QUEUE_PROPERTIES *) malloc(data_size);
+ err = vkGetGpuInfo(demo->gpu, VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&data_size, demo->queue_props);
assert(!err);
- queue_count = (uint32_t) (data_size / sizeof(XGL_PHYSICAL_GPU_QUEUE_PROPERTIES));
+ queue_count = (uint32_t) (data_size / sizeof(VK_PHYSICAL_GPU_QUEUE_PROPERTIES));
assert(queue_count >= 1);
for (i = 0; i < queue_count; i++) {
- if (demo->queue_props[i].queueFlags & XGL_QUEUE_GRAPHICS_BIT)
+ if (demo->queue_props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT)
break;
}
assert(i < queue_count);
demo->graphics_queue_node_index = i;
- err = xglGetDeviceQueue(demo->device, demo->graphics_queue_node_index,
+ err = vkGetDeviceQueue(demo->device, demo->graphics_queue_node_index,
0, &demo->queue);
assert(!err);
}
}
demo_init_connection(demo);
- demo_init_xgl(demo);
+ demo_init_vk(demo);
demo->width = 300;
demo->height = 300;
- demo->format = XGL_FMT_B8G8R8A8_UNORM;
+ demo->format = VK_FMT_B8G8R8A8_UNORM;
}
static void demo_cleanup(struct demo *demo)
{
uint32_t i, j;
- xglDestroyObject(demo->desc_set);
- xglDestroyObject(demo->desc_pool);
+ vkDestroyObject(demo->desc_set);
+ vkDestroyObject(demo->desc_pool);
- xglDestroyObject(demo->cmd);
+ vkDestroyObject(demo->cmd);
- xglDestroyObject(demo->viewport);
- xglDestroyObject(demo->raster);
- xglDestroyObject(demo->color_blend);
- xglDestroyObject(demo->depth_stencil);
+ vkDestroyObject(demo->viewport);
+ vkDestroyObject(demo->raster);
+ vkDestroyObject(demo->color_blend);
+ vkDestroyObject(demo->depth_stencil);
- xglDestroyObject(demo->pipeline);
- xglDestroyObject(demo->desc_layout_chain);
- xglDestroyObject(demo->desc_layout);
+ vkDestroyObject(demo->pipeline);
+ vkDestroyObject(demo->desc_layout_chain);
+ vkDestroyObject(demo->desc_layout);
- xglBindObjectMemory(demo->vertices.buf, 0, XGL_NULL_HANDLE, 0);
- xglDestroyObject(demo->vertices.buf);
+ vkBindObjectMemory(demo->vertices.buf, 0, VK_NULL_HANDLE, 0);
+ vkDestroyObject(demo->vertices.buf);
demo_remove_mem_refs(demo, demo->vertices.num_mem, demo->vertices.mem);
for (j = 0; j < demo->vertices.num_mem; j++)
- xglFreeMemory(demo->vertices.mem[j]);
+ vkFreeMemory(demo->vertices.mem[j]);
for (i = 0; i < DEMO_TEXTURE_COUNT; i++) {
- xglDestroyObject(demo->textures[i].view);
- xglBindObjectMemory(demo->textures[i].image, 0, XGL_NULL_HANDLE, 0);
- xglDestroyObject(demo->textures[i].image);
+ vkDestroyObject(demo->textures[i].view);
+ vkBindObjectMemory(demo->textures[i].image, 0, VK_NULL_HANDLE, 0);
+ vkDestroyObject(demo->textures[i].image);
demo_remove_mem_refs(demo, demo->textures[i].num_mem, demo->textures[i].mem);
for (j = 0; j < demo->textures[i].num_mem; j++)
- xglFreeMemory(demo->textures[i].mem[j]);
+ vkFreeMemory(demo->textures[i].mem[j]);
free(demo->textures[i].mem);
- xglDestroyObject(demo->textures[i].sampler);
+ vkDestroyObject(demo->textures[i].sampler);
}
- xglDestroyObject(demo->depth.view);
- xglBindObjectMemory(demo->depth.image, 0, XGL_NULL_HANDLE, 0);
+ vkDestroyObject(demo->depth.view);
+ vkBindObjectMemory(demo->depth.image, 0, VK_NULL_HANDLE, 0);
demo_remove_mem_refs(demo, demo->depth.num_mem, demo->depth.mem);
- xglDestroyObject(demo->depth.image);
+ vkDestroyObject(demo->depth.image);
for (j = 0; j < demo->depth.num_mem; j++)
- xglFreeMemory(demo->depth.mem[j]);
+ vkFreeMemory(demo->depth.mem[j]);
for (i = 0; i < DEMO_BUFFER_COUNT; i++) {
- xglDestroyObject(demo->buffers[i].fence);
- xglDestroyObject(demo->buffers[i].view);
- xglDestroyObject(demo->buffers[i].image);
+ vkDestroyObject(demo->buffers[i].fence);
+ vkDestroyObject(demo->buffers[i].view);
+ vkDestroyObject(demo->buffers[i].image);
demo_remove_mem_refs(demo, 1, &demo->buffers[i].mem);
}
- xglDestroyDevice(demo->device);
- xglDestroyInstance(demo->inst);
+ vkDestroyDevice(demo->device);
+ vkDestroyInstance(demo->inst);
xcb_destroy_window(demo->connection, demo->window);
xcb_disconnect(demo->connection);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <string.h>
#include <assert.h>
-#include <xgl.h>
+#include <vulkan.h>
#define ERR(err) printf("%s:%d: failed with %s\n", \
- __FILE__, __LINE__, xgl_result_string(err));
+ __FILE__, __LINE__, vk_result_string(err));
#define ERR_EXIT(err) do { ERR(err); exit(-1); } while (0)
struct app_dev {
struct app_gpu *gpu; /* point back to the GPU */
- XGL_DEVICE obj;
+ VK_DEVICE obj;
- XGL_FORMAT_PROPERTIES format_props[XGL_NUM_FMT];
+ VK_FORMAT_PROPERTIES format_props[VK_NUM_FMT];
};
struct app_gpu {
uint32_t id;
- XGL_PHYSICAL_GPU obj;
+ VK_PHYSICAL_GPU obj;
- XGL_PHYSICAL_GPU_PROPERTIES props;
- XGL_PHYSICAL_GPU_PERFORMANCE perf;
+ VK_PHYSICAL_GPU_PROPERTIES props;
+ VK_PHYSICAL_GPU_PERFORMANCE perf;
uint32_t queue_count;
- XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
- XGL_DEVICE_QUEUE_CREATE_INFO *queue_reqs;
+ VK_PHYSICAL_GPU_QUEUE_PROPERTIES *queue_props;
+ VK_DEVICE_QUEUE_CREATE_INFO *queue_reqs;
- XGL_PHYSICAL_GPU_MEMORY_PROPERTIES memory_props;
+ VK_PHYSICAL_GPU_MEMORY_PROPERTIES memory_props;
uint32_t extension_count;
char **extensions;
struct app_dev dev;
};
-static const char *xgl_result_string(XGL_RESULT err)
+static const char *vk_result_string(VK_RESULT err)
{
switch (err) {
#define STR(r) case r: return #r
- STR(XGL_SUCCESS);
- STR(XGL_UNSUPPORTED);
- STR(XGL_NOT_READY);
- STR(XGL_TIMEOUT);
- STR(XGL_EVENT_SET);
- STR(XGL_EVENT_RESET);
- STR(XGL_ERROR_UNKNOWN);
- STR(XGL_ERROR_UNAVAILABLE);
- STR(XGL_ERROR_INITIALIZATION_FAILED);
- STR(XGL_ERROR_OUT_OF_MEMORY);
- STR(XGL_ERROR_OUT_OF_GPU_MEMORY);
- STR(XGL_ERROR_DEVICE_ALREADY_CREATED);
- STR(XGL_ERROR_DEVICE_LOST);
- STR(XGL_ERROR_INVALID_POINTER);
- STR(XGL_ERROR_INVALID_VALUE);
- STR(XGL_ERROR_INVALID_HANDLE);
- STR(XGL_ERROR_INVALID_ORDINAL);
- STR(XGL_ERROR_INVALID_MEMORY_SIZE);
- STR(XGL_ERROR_INVALID_EXTENSION);
- STR(XGL_ERROR_INVALID_FLAGS);
- STR(XGL_ERROR_INVALID_ALIGNMENT);
- STR(XGL_ERROR_INVALID_FORMAT);
- STR(XGL_ERROR_INVALID_IMAGE);
- STR(XGL_ERROR_INVALID_DESCRIPTOR_SET_DATA);
- STR(XGL_ERROR_INVALID_QUEUE_TYPE);
- STR(XGL_ERROR_INVALID_OBJECT_TYPE);
- STR(XGL_ERROR_UNSUPPORTED_SHADER_IL_VERSION);
- STR(XGL_ERROR_BAD_SHADER_CODE);
- STR(XGL_ERROR_BAD_PIPELINE_DATA);
- STR(XGL_ERROR_TOO_MANY_MEMORY_REFERENCES);
- STR(XGL_ERROR_NOT_MAPPABLE);
- STR(XGL_ERROR_MEMORY_MAP_FAILED);
- STR(XGL_ERROR_MEMORY_UNMAP_FAILED);
- STR(XGL_ERROR_INCOMPATIBLE_DEVICE);
- STR(XGL_ERROR_INCOMPATIBLE_DRIVER);
- STR(XGL_ERROR_INCOMPLETE_COMMAND_BUFFER);
- STR(XGL_ERROR_BUILDING_COMMAND_BUFFER);
- STR(XGL_ERROR_MEMORY_NOT_BOUND);
- STR(XGL_ERROR_INCOMPATIBLE_QUEUE);
- STR(XGL_ERROR_NOT_SHAREABLE);
+ STR(VK_SUCCESS);
+ STR(VK_UNSUPPORTED);
+ STR(VK_NOT_READY);
+ STR(VK_TIMEOUT);
+ STR(VK_EVENT_SET);
+ STR(VK_EVENT_RESET);
+ STR(VK_ERROR_UNKNOWN);
+ STR(VK_ERROR_UNAVAILABLE);
+ STR(VK_ERROR_INITIALIZATION_FAILED);
+ STR(VK_ERROR_OUT_OF_MEMORY);
+ STR(VK_ERROR_OUT_OF_GPU_MEMORY);
+ STR(VK_ERROR_DEVICE_ALREADY_CREATED);
+ STR(VK_ERROR_DEVICE_LOST);
+ STR(VK_ERROR_INVALID_POINTER);
+ STR(VK_ERROR_INVALID_VALUE);
+ STR(VK_ERROR_INVALID_HANDLE);
+ STR(VK_ERROR_INVALID_ORDINAL);
+ STR(VK_ERROR_INVALID_MEMORY_SIZE);
+ STR(VK_ERROR_INVALID_EXTENSION);
+ STR(VK_ERROR_INVALID_FLAGS);
+ STR(VK_ERROR_INVALID_ALIGNMENT);
+ STR(VK_ERROR_INVALID_FORMAT);
+ STR(VK_ERROR_INVALID_IMAGE);
+ STR(VK_ERROR_INVALID_DESCRIPTOR_SET_DATA);
+ STR(VK_ERROR_INVALID_QUEUE_TYPE);
+ STR(VK_ERROR_INVALID_OBJECT_TYPE);
+ STR(VK_ERROR_UNSUPPORTED_SHADER_IL_VERSION);
+ STR(VK_ERROR_BAD_SHADER_CODE);
+ STR(VK_ERROR_BAD_PIPELINE_DATA);
+ STR(VK_ERROR_TOO_MANY_MEMORY_REFERENCES);
+ STR(VK_ERROR_NOT_MAPPABLE);
+ STR(VK_ERROR_MEMORY_MAP_FAILED);
+ STR(VK_ERROR_MEMORY_UNMAP_FAILED);
+ STR(VK_ERROR_INCOMPATIBLE_DEVICE);
+ STR(VK_ERROR_INCOMPATIBLE_DRIVER);
+ STR(VK_ERROR_INCOMPLETE_COMMAND_BUFFER);
+ STR(VK_ERROR_BUILDING_COMMAND_BUFFER);
+ STR(VK_ERROR_MEMORY_NOT_BOUND);
+ STR(VK_ERROR_INCOMPATIBLE_QUEUE);
+ STR(VK_ERROR_NOT_SHAREABLE);
#undef STR
default: return "UNKNOWN_RESULT";
}
}
-static const char *xgl_gpu_type_string(XGL_PHYSICAL_GPU_TYPE type)
+static const char *vk_gpu_type_string(VK_PHYSICAL_GPU_TYPE type)
{
switch (type) {
-#define STR(r) case XGL_GPU_TYPE_ ##r: return #r
+#define STR(r) case VK_GPU_TYPE_ ##r: return #r
STR(OTHER);
STR(INTEGRATED);
STR(DISCRETE);
}
}
-static const char *xgl_format_string(XGL_FORMAT fmt)
+static const char *vk_format_string(VK_FORMAT fmt)
{
switch (fmt) {
-#define STR(r) case XGL_FMT_ ##r: return #r
+#define STR(r) case VK_FMT_ ##r: return #r
STR(UNDEFINED);
STR(R4G4_UNORM);
STR(R4G4_USCALED);
static void app_dev_init_formats(struct app_dev *dev)
{
- XGL_FORMAT f;
+ VK_FORMAT f;
- for (f = 0; f < XGL_NUM_FMT; f++) {
- const XGL_FORMAT fmt = f;
- XGL_RESULT err;
+ for (f = 0; f < VK_NUM_FMT; f++) {
+ const VK_FORMAT fmt = f;
+ VK_RESULT err;
size_t size = sizeof(dev->format_props[f]);
- err = xglGetFormatInfo(dev->obj, fmt,
- XGL_INFO_TYPE_FORMAT_PROPERTIES,
+ err = vkGetFormatInfo(dev->obj, fmt,
+ VK_INFO_TYPE_FORMAT_PROPERTIES,
&size, &dev->format_props[f]);
if (err) {
memset(&dev->format_props[f], 0,
sizeof(dev->format_props[f]));
}
else if (size != sizeof(dev->format_props[f])) {
- ERR_EXIT(XGL_ERROR_UNKNOWN);
+ ERR_EXIT(VK_ERROR_UNKNOWN);
}
}
}
static void app_dev_init(struct app_dev *dev, struct app_gpu *gpu)
{
- XGL_DEVICE_CREATE_INFO info = {
- .sType = XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
+ VK_DEVICE_CREATE_INFO info = {
+ .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
.pNext = NULL,
.queueRecordCount = 0,
.pRequestedQueues = NULL,
.extensionCount = 0,
.ppEnabledExtensionNames = NULL,
- .maxValidationLevel = XGL_VALIDATION_LEVEL_END_RANGE,
- .flags = XGL_DEVICE_CREATE_VALIDATION_BIT,
+ .maxValidationLevel = VK_VALIDATION_LEVEL_END_RANGE,
+ .flags = VK_DEVICE_CREATE_VALIDATION_BIT,
};
- XGL_RESULT err;
+ VK_RESULT err;
/* request all queues */
info.queueRecordCount = gpu->queue_count;
info.extensionCount = gpu->extension_count;
info.ppEnabledExtensionNames = (const char*const*) gpu->extensions;
dev->gpu = gpu;
- err = xglCreateDevice(gpu->obj, &info, &dev->obj);
+ err = vkCreateDevice(gpu->obj, &info, &dev->obj);
if (err)
ERR_EXIT(err);
static void app_dev_destroy(struct app_dev *dev)
{
- xglDestroyDevice(dev->obj);
+ vkDestroyDevice(dev->obj);
}
static void app_gpu_init_extensions(struct app_gpu *gpu)
{
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i;
static char *known_extensions[] = {
- "XGL_WSI_X11",
+ "VK_WSI_X11",
};
for (i = 0; i < ARRAY_SIZE(known_extensions); i++) {
- err = xglGetExtensionSupport(gpu->obj, known_extensions[i]);
+ err = vkGetExtensionSupport(gpu->obj, known_extensions[i]);
if (!err)
gpu->extension_count++;
}
gpu->extensions =
malloc(sizeof(gpu->extensions[0]) * gpu->extension_count);
if (!gpu->extensions)
- ERR_EXIT(XGL_ERROR_OUT_OF_MEMORY);
+ ERR_EXIT(VK_ERROR_OUT_OF_MEMORY);
gpu->extension_count = 0;
for (i = 0; i < ARRAY_SIZE(known_extensions); i++) {
- err = xglGetExtensionSupport(gpu->obj, known_extensions[i]);
+ err = vkGetExtensionSupport(gpu->obj, known_extensions[i]);
if (!err)
gpu->extensions[gpu->extension_count++] = known_extensions[i];
}
}
-static void app_gpu_init(struct app_gpu *gpu, uint32_t id, XGL_PHYSICAL_GPU obj)
+static void app_gpu_init(struct app_gpu *gpu, uint32_t id, VK_PHYSICAL_GPU obj)
{
size_t size;
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i;
memset(gpu, 0, sizeof(*gpu));
gpu->id = id;
gpu->obj = obj;
size = sizeof(gpu->props);
- err = xglGetGpuInfo(gpu->obj,
- XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
+ err = vkGetGpuInfo(gpu->obj,
+ VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
&size, &gpu->props);
if (err || size != sizeof(gpu->props))
ERR_EXIT(err);
size = sizeof(gpu->perf);
- err = xglGetGpuInfo(gpu->obj,
- XGL_INFO_TYPE_PHYSICAL_GPU_PERFORMANCE,
+ err = vkGetGpuInfo(gpu->obj,
+ VK_INFO_TYPE_PHYSICAL_GPU_PERFORMANCE,
&size, &gpu->perf);
if (err || size != sizeof(gpu->perf))
ERR_EXIT(err);
/* get queue count */
- err = xglGetGpuInfo(gpu->obj,
- XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ err = vkGetGpuInfo(gpu->obj,
+ VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&size, NULL);
if (err || size % sizeof(gpu->queue_props[0]))
ERR_EXIT(err);
malloc(sizeof(gpu->queue_props[0]) * gpu->queue_count);
size = sizeof(gpu->queue_props[0]) * gpu->queue_count;
if (!gpu->queue_props)
- ERR_EXIT(XGL_ERROR_OUT_OF_MEMORY);
- err = xglGetGpuInfo(gpu->obj,
- XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
+ ERR_EXIT(VK_ERROR_OUT_OF_MEMORY);
+ err = vkGetGpuInfo(gpu->obj,
+ VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES,
&size, gpu->queue_props);
if (err || size != sizeof(gpu->queue_props[0]) * gpu->queue_count)
ERR_EXIT(err);
size = sizeof(*gpu->queue_reqs) * gpu->queue_count;
gpu->queue_reqs = malloc(sizeof(*gpu->queue_reqs) * gpu->queue_count);
if (!gpu->queue_reqs)
- ERR_EXIT(XGL_ERROR_OUT_OF_MEMORY);
+ ERR_EXIT(VK_ERROR_OUT_OF_MEMORY);
for (i = 0; i < gpu->queue_count; i++) {
gpu->queue_reqs[i].queueNodeIndex = i;
gpu->queue_reqs[i].queueCount = gpu->queue_props[i].queueCount;
}
size = sizeof(gpu->memory_props);
- err = xglGetGpuInfo(gpu->obj,
- XGL_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES,
+ err = vkGetGpuInfo(gpu->obj,
+ VK_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES,
&size, &gpu->memory_props);
if (err || size != sizeof(gpu->memory_props))
ERR_EXIT(err);
free(gpu->queue_props);
}
-static void app_dev_dump_format_props(const struct app_dev *dev, XGL_FORMAT fmt)
+static void app_dev_dump_format_props(const struct app_dev *dev, VK_FORMAT fmt)
{
- const XGL_FORMAT_PROPERTIES *props = &dev->format_props[fmt];
+ const VK_FORMAT_PROPERTIES *props = &dev->format_props[fmt];
struct {
const char *name;
- XGL_FLAGS flags;
+ VK_FLAGS flags;
} tilings[2];
uint32_t i;
tilings[1].name = "optimal";
tilings[1].flags = props->optimalTilingFeatures;
- printf("FORMAT_%s\n", xgl_format_string(fmt));
+ printf("FORMAT_%s\n", vk_format_string(fmt));
for (i = 0; i < ARRAY_SIZE(tilings); i++) {
if (!tilings[i].flags)
continue;
printf("\t%s tiling image =%s%s%s\n", tilings[i].name,
- (tilings[i].flags & XGL_FORMAT_IMAGE_SHADER_READ_BIT) ? " read" : "",
- (tilings[i].flags & XGL_FORMAT_IMAGE_SHADER_WRITE_BIT) ? " write" : "",
- (tilings[i].flags & XGL_FORMAT_IMAGE_COPY_BIT) ? " copy" : "");
+ (tilings[i].flags & VK_FORMAT_IMAGE_SHADER_READ_BIT) ? " read" : "",
+ (tilings[i].flags & VK_FORMAT_IMAGE_SHADER_WRITE_BIT) ? " write" : "",
+ (tilings[i].flags & VK_FORMAT_IMAGE_COPY_BIT) ? " copy" : "");
printf("\t%s tiling memory =%s\n", tilings[i].name,
- (tilings[i].flags & XGL_FORMAT_MEMORY_SHADER_ACCESS_BIT) ? " access" : "");
+ (tilings[i].flags & VK_FORMAT_MEMORY_SHADER_ACCESS_BIT) ? " access" : "");
printf("\t%s tiling attachment =%s%s%s%s%s\n", tilings[i].name,
- (tilings[i].flags & XGL_FORMAT_COLOR_ATTACHMENT_WRITE_BIT) ? " color" : "",
- (tilings[i].flags & XGL_FORMAT_COLOR_ATTACHMENT_BLEND_BIT) ? " blend" : "",
- (tilings[i].flags & XGL_FORMAT_DEPTH_ATTACHMENT_BIT) ? " depth" : "",
- (tilings[i].flags & XGL_FORMAT_STENCIL_ATTACHMENT_BIT) ? " stencil" : "",
- (tilings[i].flags & XGL_FORMAT_MSAA_ATTACHMENT_BIT) ? " msaa" : "");
+ (tilings[i].flags & VK_FORMAT_COLOR_ATTACHMENT_WRITE_BIT) ? " color" : "",
+ (tilings[i].flags & VK_FORMAT_COLOR_ATTACHMENT_BLEND_BIT) ? " blend" : "",
+ (tilings[i].flags & VK_FORMAT_DEPTH_ATTACHMENT_BIT) ? " depth" : "",
+ (tilings[i].flags & VK_FORMAT_STENCIL_ATTACHMENT_BIT) ? " stencil" : "",
+ (tilings[i].flags & VK_FORMAT_MSAA_ATTACHMENT_BIT) ? " msaa" : "");
printf("\t%s tiling conversion = %u\n", tilings[i].name,
- (bool) (tilings[i].flags & XGL_FORMAT_CONVERSION_BIT));
+ (bool) (tilings[i].flags & VK_FORMAT_CONVERSION_BIT));
}
}
static void
app_dev_dump(const struct app_dev *dev)
{
- XGL_FORMAT fmt;
+ VK_FORMAT fmt;
- for (fmt = 0; fmt < XGL_NUM_FMT; fmt++) {
+ for (fmt = 0; fmt < VK_NUM_FMT; fmt++) {
app_dev_dump_format_props(dev, fmt);
}
}
static void app_gpu_dump_multi_compat(const struct app_gpu *gpu, const struct app_gpu *other,
- const XGL_GPU_COMPATIBILITY_INFO *info)
+ const VK_GPU_COMPATIBILITY_INFO *info)
{
- printf("XGL_GPU_COMPATIBILITY_INFO[GPU%d]\n", other->id);
+ printf("VK_GPU_COMPATIBILITY_INFO[GPU%d]\n", other->id);
-#define TEST(info, b) printf(#b " = %u\n", (bool) (info->compatibilityFlags & XGL_GPU_COMPAT_ ##b## _BIT))
+#define TEST(info, b) printf(#b " = %u\n", (bool) (info->compatibilityFlags & VK_GPU_COMPAT_ ##b## _BIT))
TEST(info, ASIC_FEATURES);
TEST(info, IQ_MATCH);
TEST(info, PEER_TRANSFER);
static void app_gpu_multi_compat(struct app_gpu *gpus, uint32_t gpu_count)
{
- XGL_RESULT err;
+ VK_RESULT err;
uint32_t i, j;
for (i = 0; i < gpu_count; i++) {
for (j = 0; j < gpu_count; j++) {
- XGL_GPU_COMPATIBILITY_INFO info;
+ VK_GPU_COMPATIBILITY_INFO info;
if (i == j)
continue;
- err = xglGetMultiGpuCompatibility(gpus[i].obj,
+ err = vkGetMultiGpuCompatibility(gpus[i].obj,
gpus[j].obj, &info);
if (err)
ERR_EXIT(err);
static void app_gpu_dump_props(const struct app_gpu *gpu)
{
- const XGL_PHYSICAL_GPU_PROPERTIES *props = &gpu->props;
+ const VK_PHYSICAL_GPU_PROPERTIES *props = &gpu->props;
- printf("XGL_PHYSICAL_GPU_PROPERTIES\n");
+ printf("VK_PHYSICAL_GPU_PROPERTIES\n");
printf("\tapiVersion = %u\n", props->apiVersion);
printf("\tdriverVersion = %u\n", props->driverVersion);
printf("\tvendorId = 0x%04x\n", props->vendorId);
printf("\tdeviceId = 0x%04x\n", props->deviceId);
- printf("\tgpuType = %s\n", xgl_gpu_type_string(props->gpuType));
+ printf("\tgpuType = %s\n", vk_gpu_type_string(props->gpuType));
printf("\tgpuName = %s\n", props->gpuName);
printf("\tmaxInlineMemoryUpdateSize = %zu\n", props->maxInlineMemoryUpdateSize);
printf("\tmaxBoundDescriptorSets = %u\n", props->maxBoundDescriptorSets);
static void app_gpu_dump_perf(const struct app_gpu *gpu)
{
- const XGL_PHYSICAL_GPU_PERFORMANCE *perf = &gpu->perf;
+ const VK_PHYSICAL_GPU_PERFORMANCE *perf = &gpu->perf;
- printf("XGL_PHYSICAL_GPU_PERFORMANCE\n");
+ printf("VK_PHYSICAL_GPU_PERFORMANCE\n");
printf("\tmaxGpuClock = %f\n", perf->maxGpuClock);
printf("\taluPerClock = %f\n", perf->aluPerClock);
printf("\ttexPerClock = %f\n", perf->texPerClock);
static void app_gpu_dump_queue_props(const struct app_gpu *gpu, uint32_t id)
{
- const XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *props = &gpu->queue_props[id];
+ const VK_PHYSICAL_GPU_QUEUE_PROPERTIES *props = &gpu->queue_props[id];
- printf("XGL_PHYSICAL_GPU_QUEUE_PROPERTIES[%d]\n", id);
+ printf("VK_PHYSICAL_GPU_QUEUE_PROPERTIES[%d]\n", id);
printf("\tqueueFlags = %c%c%c%c\n",
- (props->queueFlags & XGL_QUEUE_GRAPHICS_BIT) ? 'G' : '.',
- (props->queueFlags & XGL_QUEUE_COMPUTE_BIT) ? 'C' : '.',
- (props->queueFlags & XGL_QUEUE_DMA_BIT) ? 'D' : '.',
- (props->queueFlags & XGL_QUEUE_EXTENDED_BIT) ? 'X' : '.');
+ (props->queueFlags & VK_QUEUE_GRAPHICS_BIT) ? 'G' : '.',
+ (props->queueFlags & VK_QUEUE_COMPUTE_BIT) ? 'C' : '.',
+ (props->queueFlags & VK_QUEUE_DMA_BIT) ? 'D' : '.',
+ (props->queueFlags & VK_QUEUE_EXTENDED_BIT) ? 'X' : '.');
printf("\tqueueCount = %u\n", props->queueCount);
printf("\tmaxAtomicCounters = %u\n", props->maxAtomicCounters);
printf("\tsupportsTimestamps = %u\n", props->supportsTimestamps);
static void app_gpu_dump_memory_props(const struct app_gpu *gpu)
{
- const XGL_PHYSICAL_GPU_MEMORY_PROPERTIES *props = &gpu->memory_props;
+ const VK_PHYSICAL_GPU_MEMORY_PROPERTIES *props = &gpu->memory_props;
- printf("XGL_PHYSICAL_GPU_MEMORY_PROPERTIES\n");
+ printf("VK_PHYSICAL_GPU_MEMORY_PROPERTIES\n");
printf("\tsupportsMigration = %u\n", props->supportsMigration);
printf("\tsupportsPinning = %u\n", props->supportsPinning);
}
int main(int argc, char **argv)
{
- static const XGL_APPLICATION_INFO app_info = {
- .sType = XGL_STRUCTURE_TYPE_APPLICATION_INFO,
+ static const VK_APPLICATION_INFO app_info = {
+ .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
.pNext = NULL,
- .pAppName = "xglinfo",
+ .pAppName = "vkinfo",
.appVersion = 1,
- .pEngineName = "xglinfo",
+ .pEngineName = "vkinfo",
.engineVersion = 1,
- .apiVersion = XGL_API_VERSION,
+ .apiVersion = VK_API_VERSION,
};
- static const XGL_INSTANCE_CREATE_INFO inst_info = {
- .sType = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
+ static const VK_INSTANCE_CREATE_INFO inst_info = {
+ .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
.pNext = NULL,
.pAppInfo = &app_info,
.pAllocCb = NULL,
.ppEnabledExtensionNames = NULL,
};
struct app_gpu gpus[MAX_GPUS];
- XGL_PHYSICAL_GPU objs[MAX_GPUS];
- XGL_INSTANCE inst;
+ VK_PHYSICAL_GPU objs[MAX_GPUS];
+ VK_INSTANCE inst;
uint32_t gpu_count, i;
- XGL_RESULT err;
+ VK_RESULT err;
- err = xglCreateInstance(&inst_info, &inst);
- if (err == XGL_ERROR_INCOMPATIBLE_DRIVER) {
+ err = vkCreateInstance(&inst_info, &inst);
+ if (err == VK_ERROR_INCOMPATIBLE_DRIVER) {
printf("Cannot find a compatible Vulkan installable client driver "
"(ICD).\nExiting ...\n");
fflush(stdout);
ERR_EXIT(err);
}
- err = xglEnumerateGpus(inst, MAX_GPUS, &gpu_count, objs);
+ err = vkEnumerateGpus(inst, MAX_GPUS, &gpu_count, objs);
if (err)
ERR_EXIT(err);
for (i = 0; i < gpu_count; i++)
app_gpu_destroy(&gpus[i]);
- xglDestroyInstance(inst);
+ vkDestroyInstance(inst);
return 0;
}
- [Implementation for Intel GPUs](intel)
- [Null driver](nulldrv)
- [*Sample Driver Tests*](../tests)
- - Now includes Golden images to verify xgl_render_tests rendering.
+ - Now includes Golden images to verify vk_render_tests rendering.
-common/ provides helper and utility functions, as well as all XGL entry points
-except xglInitAndEnumerateGpus. Hardware drivers are required to provide that
-function, and to embed a "XGL_LAYER_DISPATCH_TABLE *" as the first member of
-XGL_PHYSICAL_GPU and all XGL_BASE_OBJECT.
+common/ provides helper and utility functions, as well as all VK entry points
+except vkInitAndEnumerateGpus. Hardware drivers are required to provide that
+function, and to embed a "VK_LAYER_DISPATCH_TABLE *" as the first member of
+VK_PHYSICAL_GPU and all VK_BASE_OBJECT.
Thread safety
They require that there is no other thread calling the ICD when these
functions are called
- - xglInitAndEnumerateGpus
- - xglDbgRegisterMsgCallback
- - xglDbgUnregisterMsgCallback
- - xglDbgSetGlobalOption
+ - vkInitAndEnumerateGpus
+ - vkDbgRegisterMsgCallback
+ - vkDbgUnregisterMsgCallback
+ - vkDbgSetGlobalOption
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
return devices;
} else {
dev = icd_instance_alloc(instance, sizeof(*dev), 0,
- XGL_SYSTEM_ALLOC_INTERNAL_TEMP);
+ VK_SYSTEM_ALLOC_INTERNAL_TEMP);
if (!dev)
return devices;
udev = udev_new();
if (udev == NULL) {
- icd_instance_log(instance, XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0,
- XGL_NULL_HANDLE, 0, 0, "failed to initialize udev context");
+ icd_instance_log(instance, VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0,
+ VK_NULL_HANDLE, 0, 0, "failed to initialize udev context");
return NULL;
}
e = udev_enumerate_new(udev);
if (e == NULL) {
- icd_instance_log(instance, XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0,
- XGL_NULL_HANDLE, 0, 0,
+ icd_instance_log(instance, VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0,
+ VK_NULL_HANDLE, 0, 0,
"failed to initialize udev enumerate context");
udev_unref(udev);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
static const struct icd_format_info {
size_t size;
uint32_t channel_count;
-} icd_format_table[XGL_NUM_FMT] = {
- [XGL_FMT_UNDEFINED] = { 0, 0 },
- [XGL_FMT_R4G4_UNORM] = { 1, 2 },
- [XGL_FMT_R4G4_USCALED] = { 1, 2 },
- [XGL_FMT_R4G4B4A4_UNORM] = { 2, 4 },
- [XGL_FMT_R4G4B4A4_USCALED] = { 2, 4 },
- [XGL_FMT_R5G6B5_UNORM] = { 2, 3 },
- [XGL_FMT_R5G6B5_USCALED] = { 2, 3 },
- [XGL_FMT_R5G5B5A1_UNORM] = { 2, 4 },
- [XGL_FMT_R5G5B5A1_USCALED] = { 2, 4 },
- [XGL_FMT_R8_UNORM] = { 1, 1 },
- [XGL_FMT_R8_SNORM] = { 1, 1 },
- [XGL_FMT_R8_USCALED] = { 1, 1 },
- [XGL_FMT_R8_SSCALED] = { 1, 1 },
- [XGL_FMT_R8_UINT] = { 1, 1 },
- [XGL_FMT_R8_SINT] = { 1, 1 },
- [XGL_FMT_R8_SRGB] = { 1, 1 },
- [XGL_FMT_R8G8_UNORM] = { 2, 2 },
- [XGL_FMT_R8G8_SNORM] = { 2, 2 },
- [XGL_FMT_R8G8_USCALED] = { 2, 2 },
- [XGL_FMT_R8G8_SSCALED] = { 2, 2 },
- [XGL_FMT_R8G8_UINT] = { 2, 2 },
- [XGL_FMT_R8G8_SINT] = { 2, 2 },
- [XGL_FMT_R8G8_SRGB] = { 2, 2 },
- [XGL_FMT_R8G8B8_UNORM] = { 3, 3 },
- [XGL_FMT_R8G8B8_SNORM] = { 3, 3 },
- [XGL_FMT_R8G8B8_USCALED] = { 3, 3 },
- [XGL_FMT_R8G8B8_SSCALED] = { 3, 3 },
- [XGL_FMT_R8G8B8_UINT] = { 3, 3 },
- [XGL_FMT_R8G8B8_SINT] = { 3, 3 },
- [XGL_FMT_R8G8B8_SRGB] = { 3, 3 },
- [XGL_FMT_R8G8B8A8_UNORM] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_SNORM] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_USCALED] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_SSCALED] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_UINT] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_SINT] = { 4, 4 },
- [XGL_FMT_R8G8B8A8_SRGB] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_UNORM] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_SNORM] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_USCALED] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_SSCALED] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_UINT] = { 4, 4 },
- [XGL_FMT_R10G10B10A2_SINT] = { 4, 4 },
- [XGL_FMT_R16_UNORM] = { 2, 1 },
- [XGL_FMT_R16_SNORM] = { 2, 1 },
- [XGL_FMT_R16_USCALED] = { 2, 1 },
- [XGL_FMT_R16_SSCALED] = { 2, 1 },
- [XGL_FMT_R16_UINT] = { 2, 1 },
- [XGL_FMT_R16_SINT] = { 2, 1 },
- [XGL_FMT_R16_SFLOAT] = { 2, 1 },
- [XGL_FMT_R16G16_UNORM] = { 4, 2 },
- [XGL_FMT_R16G16_SNORM] = { 4, 2 },
- [XGL_FMT_R16G16_USCALED] = { 4, 2 },
- [XGL_FMT_R16G16_SSCALED] = { 4, 2 },
- [XGL_FMT_R16G16_UINT] = { 4, 2 },
- [XGL_FMT_R16G16_SINT] = { 4, 2 },
- [XGL_FMT_R16G16_SFLOAT] = { 4, 2 },
- [XGL_FMT_R16G16B16_UNORM] = { 6, 3 },
- [XGL_FMT_R16G16B16_SNORM] = { 6, 3 },
- [XGL_FMT_R16G16B16_USCALED] = { 6, 3 },
- [XGL_FMT_R16G16B16_SSCALED] = { 6, 3 },
- [XGL_FMT_R16G16B16_UINT] = { 6, 3 },
- [XGL_FMT_R16G16B16_SINT] = { 6, 3 },
- [XGL_FMT_R16G16B16_SFLOAT] = { 6, 3 },
- [XGL_FMT_R16G16B16A16_UNORM] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_SNORM] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_USCALED] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_SSCALED] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_UINT] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_SINT] = { 8, 4 },
- [XGL_FMT_R16G16B16A16_SFLOAT] = { 8, 4 },
- [XGL_FMT_R32_UINT] = { 4, 1 },
- [XGL_FMT_R32_SINT] = { 4, 1 },
- [XGL_FMT_R32_SFLOAT] = { 4, 1 },
- [XGL_FMT_R32G32_UINT] = { 8, 2 },
- [XGL_FMT_R32G32_SINT] = { 8, 2 },
- [XGL_FMT_R32G32_SFLOAT] = { 8, 2 },
- [XGL_FMT_R32G32B32_UINT] = { 12, 3 },
- [XGL_FMT_R32G32B32_SINT] = { 12, 3 },
- [XGL_FMT_R32G32B32_SFLOAT] = { 12, 3 },
- [XGL_FMT_R32G32B32A32_UINT] = { 16, 4 },
- [XGL_FMT_R32G32B32A32_SINT] = { 16, 4 },
- [XGL_FMT_R32G32B32A32_SFLOAT] = { 16, 4 },
- [XGL_FMT_R64_SFLOAT] = { 8, 1 },
- [XGL_FMT_R64G64_SFLOAT] = { 16, 2 },
- [XGL_FMT_R64G64B64_SFLOAT] = { 24, 3 },
- [XGL_FMT_R64G64B64A64_SFLOAT] = { 32, 4 },
- [XGL_FMT_R11G11B10_UFLOAT] = { 4, 3 },
- [XGL_FMT_R9G9B9E5_UFLOAT] = { 4, 3 },
- [XGL_FMT_D16_UNORM] = { 2, 1 },
- [XGL_FMT_D24_UNORM] = { 3, 1 },
- [XGL_FMT_D32_SFLOAT] = { 4, 1 },
- [XGL_FMT_S8_UINT] = { 1, 1 },
- [XGL_FMT_D16_UNORM_S8_UINT] = { 3, 2 },
- [XGL_FMT_D24_UNORM_S8_UINT] = { 4, 2 },
- [XGL_FMT_D32_SFLOAT_S8_UINT] = { 4, 2 },
- [XGL_FMT_BC1_RGB_UNORM] = { 8, 4 },
- [XGL_FMT_BC1_RGB_SRGB] = { 8, 4 },
- [XGL_FMT_BC1_RGBA_UNORM] = { 8, 4 },
- [XGL_FMT_BC1_RGBA_SRGB] = { 8, 4 },
- [XGL_FMT_BC2_UNORM] = { 16, 4 },
- [XGL_FMT_BC2_SRGB] = { 16, 4 },
- [XGL_FMT_BC3_UNORM] = { 16, 4 },
- [XGL_FMT_BC3_SRGB] = { 16, 4 },
- [XGL_FMT_BC4_UNORM] = { 8, 4 },
- [XGL_FMT_BC4_SNORM] = { 8, 4 },
- [XGL_FMT_BC5_UNORM] = { 16, 4 },
- [XGL_FMT_BC5_SNORM] = { 16, 4 },
- [XGL_FMT_BC6H_UFLOAT] = { 16, 4 },
- [XGL_FMT_BC6H_SFLOAT] = { 16, 4 },
- [XGL_FMT_BC7_UNORM] = { 16, 4 },
- [XGL_FMT_BC7_SRGB] = { 16, 4 },
+} icd_format_table[VK_NUM_FMT] = {
+ [VK_FMT_UNDEFINED] = { 0, 0 },
+ [VK_FMT_R4G4_UNORM] = { 1, 2 },
+ [VK_FMT_R4G4_USCALED] = { 1, 2 },
+ [VK_FMT_R4G4B4A4_UNORM] = { 2, 4 },
+ [VK_FMT_R4G4B4A4_USCALED] = { 2, 4 },
+ [VK_FMT_R5G6B5_UNORM] = { 2, 3 },
+ [VK_FMT_R5G6B5_USCALED] = { 2, 3 },
+ [VK_FMT_R5G5B5A1_UNORM] = { 2, 4 },
+ [VK_FMT_R5G5B5A1_USCALED] = { 2, 4 },
+ [VK_FMT_R8_UNORM] = { 1, 1 },
+ [VK_FMT_R8_SNORM] = { 1, 1 },
+ [VK_FMT_R8_USCALED] = { 1, 1 },
+ [VK_FMT_R8_SSCALED] = { 1, 1 },
+ [VK_FMT_R8_UINT] = { 1, 1 },
+ [VK_FMT_R8_SINT] = { 1, 1 },
+ [VK_FMT_R8_SRGB] = { 1, 1 },
+ [VK_FMT_R8G8_UNORM] = { 2, 2 },
+ [VK_FMT_R8G8_SNORM] = { 2, 2 },
+ [VK_FMT_R8G8_USCALED] = { 2, 2 },
+ [VK_FMT_R8G8_SSCALED] = { 2, 2 },
+ [VK_FMT_R8G8_UINT] = { 2, 2 },
+ [VK_FMT_R8G8_SINT] = { 2, 2 },
+ [VK_FMT_R8G8_SRGB] = { 2, 2 },
+ [VK_FMT_R8G8B8_UNORM] = { 3, 3 },
+ [VK_FMT_R8G8B8_SNORM] = { 3, 3 },
+ [VK_FMT_R8G8B8_USCALED] = { 3, 3 },
+ [VK_FMT_R8G8B8_SSCALED] = { 3, 3 },
+ [VK_FMT_R8G8B8_UINT] = { 3, 3 },
+ [VK_FMT_R8G8B8_SINT] = { 3, 3 },
+ [VK_FMT_R8G8B8_SRGB] = { 3, 3 },
+ [VK_FMT_R8G8B8A8_UNORM] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_SNORM] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_USCALED] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_SSCALED] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_UINT] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_SINT] = { 4, 4 },
+ [VK_FMT_R8G8B8A8_SRGB] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_UNORM] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_SNORM] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_USCALED] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_SSCALED] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_UINT] = { 4, 4 },
+ [VK_FMT_R10G10B10A2_SINT] = { 4, 4 },
+ [VK_FMT_R16_UNORM] = { 2, 1 },
+ [VK_FMT_R16_SNORM] = { 2, 1 },
+ [VK_FMT_R16_USCALED] = { 2, 1 },
+ [VK_FMT_R16_SSCALED] = { 2, 1 },
+ [VK_FMT_R16_UINT] = { 2, 1 },
+ [VK_FMT_R16_SINT] = { 2, 1 },
+ [VK_FMT_R16_SFLOAT] = { 2, 1 },
+ [VK_FMT_R16G16_UNORM] = { 4, 2 },
+ [VK_FMT_R16G16_SNORM] = { 4, 2 },
+ [VK_FMT_R16G16_USCALED] = { 4, 2 },
+ [VK_FMT_R16G16_SSCALED] = { 4, 2 },
+ [VK_FMT_R16G16_UINT] = { 4, 2 },
+ [VK_FMT_R16G16_SINT] = { 4, 2 },
+ [VK_FMT_R16G16_SFLOAT] = { 4, 2 },
+ [VK_FMT_R16G16B16_UNORM] = { 6, 3 },
+ [VK_FMT_R16G16B16_SNORM] = { 6, 3 },
+ [VK_FMT_R16G16B16_USCALED] = { 6, 3 },
+ [VK_FMT_R16G16B16_SSCALED] = { 6, 3 },
+ [VK_FMT_R16G16B16_UINT] = { 6, 3 },
+ [VK_FMT_R16G16B16_SINT] = { 6, 3 },
+ [VK_FMT_R16G16B16_SFLOAT] = { 6, 3 },
+ [VK_FMT_R16G16B16A16_UNORM] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_SNORM] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_USCALED] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_SSCALED] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_UINT] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_SINT] = { 8, 4 },
+ [VK_FMT_R16G16B16A16_SFLOAT] = { 8, 4 },
+ [VK_FMT_R32_UINT] = { 4, 1 },
+ [VK_FMT_R32_SINT] = { 4, 1 },
+ [VK_FMT_R32_SFLOAT] = { 4, 1 },
+ [VK_FMT_R32G32_UINT] = { 8, 2 },
+ [VK_FMT_R32G32_SINT] = { 8, 2 },
+ [VK_FMT_R32G32_SFLOAT] = { 8, 2 },
+ [VK_FMT_R32G32B32_UINT] = { 12, 3 },
+ [VK_FMT_R32G32B32_SINT] = { 12, 3 },
+ [VK_FMT_R32G32B32_SFLOAT] = { 12, 3 },
+ [VK_FMT_R32G32B32A32_UINT] = { 16, 4 },
+ [VK_FMT_R32G32B32A32_SINT] = { 16, 4 },
+ [VK_FMT_R32G32B32A32_SFLOAT] = { 16, 4 },
+ [VK_FMT_R64_SFLOAT] = { 8, 1 },
+ [VK_FMT_R64G64_SFLOAT] = { 16, 2 },
+ [VK_FMT_R64G64B64_SFLOAT] = { 24, 3 },
+ [VK_FMT_R64G64B64A64_SFLOAT] = { 32, 4 },
+ [VK_FMT_R11G11B10_UFLOAT] = { 4, 3 },
+ [VK_FMT_R9G9B9E5_UFLOAT] = { 4, 3 },
+ [VK_FMT_D16_UNORM] = { 2, 1 },
+ [VK_FMT_D24_UNORM] = { 3, 1 },
+ [VK_FMT_D32_SFLOAT] = { 4, 1 },
+ [VK_FMT_S8_UINT] = { 1, 1 },
+ [VK_FMT_D16_UNORM_S8_UINT] = { 3, 2 },
+ [VK_FMT_D24_UNORM_S8_UINT] = { 4, 2 },
+ [VK_FMT_D32_SFLOAT_S8_UINT] = { 4, 2 },
+ [VK_FMT_BC1_RGB_UNORM] = { 8, 4 },
+ [VK_FMT_BC1_RGB_SRGB] = { 8, 4 },
+ [VK_FMT_BC1_RGBA_UNORM] = { 8, 4 },
+ [VK_FMT_BC1_RGBA_SRGB] = { 8, 4 },
+ [VK_FMT_BC2_UNORM] = { 16, 4 },
+ [VK_FMT_BC2_SRGB] = { 16, 4 },
+ [VK_FMT_BC3_UNORM] = { 16, 4 },
+ [VK_FMT_BC3_SRGB] = { 16, 4 },
+ [VK_FMT_BC4_UNORM] = { 8, 4 },
+ [VK_FMT_BC4_SNORM] = { 8, 4 },
+ [VK_FMT_BC5_UNORM] = { 16, 4 },
+ [VK_FMT_BC5_SNORM] = { 16, 4 },
+ [VK_FMT_BC6H_UFLOAT] = { 16, 4 },
+ [VK_FMT_BC6H_SFLOAT] = { 16, 4 },
+ [VK_FMT_BC7_UNORM] = { 16, 4 },
+ [VK_FMT_BC7_SRGB] = { 16, 4 },
/* TODO: Initialize remaining compressed formats. */
- [XGL_FMT_ETC2_R8G8B8_UNORM] = { 0, 0 },
- [XGL_FMT_ETC2_R8G8B8A1_UNORM] = { 0, 0 },
- [XGL_FMT_ETC2_R8G8B8A8_UNORM] = { 0, 0 },
- [XGL_FMT_EAC_R11_UNORM] = { 0, 0 },
- [XGL_FMT_EAC_R11_SNORM] = { 0, 0 },
- [XGL_FMT_EAC_R11G11_UNORM] = { 0, 0 },
- [XGL_FMT_EAC_R11G11_SNORM] = { 0, 0 },
- [XGL_FMT_ASTC_4x4_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_4x4_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_5x4_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_5x4_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_5x5_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_5x5_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_6x5_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_6x5_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_6x6_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_6x6_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_8x5_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_8x5_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_8x6_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_8x6_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_8x8_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_8x8_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_10x5_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_10x5_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_10x6_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_10x6_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_10x8_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_10x8_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_10x10_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_10x10_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_12x10_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_12x10_SRGB] = { 0, 0 },
- [XGL_FMT_ASTC_12x12_UNORM] = { 0, 0 },
- [XGL_FMT_ASTC_12x12_SRGB] = { 0, 0 },
- [XGL_FMT_B5G6R5_UNORM] = { 2, 3 },
- [XGL_FMT_B5G6R5_USCALED] = { 2, 3 },
- [XGL_FMT_B8G8R8_UNORM] = { 3, 3 },
- [XGL_FMT_B8G8R8_SNORM] = { 3, 3 },
- [XGL_FMT_B8G8R8_USCALED] = { 3, 3 },
- [XGL_FMT_B8G8R8_SSCALED] = { 3, 3 },
- [XGL_FMT_B8G8R8_UINT] = { 3, 3 },
- [XGL_FMT_B8G8R8_SINT] = { 3, 3 },
- [XGL_FMT_B8G8R8_SRGB] = { 3, 3 },
- [XGL_FMT_B8G8R8A8_UNORM] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_SNORM] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_USCALED] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_SSCALED] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_UINT] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_SINT] = { 4, 4 },
- [XGL_FMT_B8G8R8A8_SRGB] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_UNORM] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_SNORM] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_USCALED] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_SSCALED] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_UINT] = { 4, 4 },
- [XGL_FMT_B10G10R10A2_SINT] = { 4, 4 },
+ [VK_FMT_ETC2_R8G8B8_UNORM] = { 0, 0 },
+ [VK_FMT_ETC2_R8G8B8A1_UNORM] = { 0, 0 },
+ [VK_FMT_ETC2_R8G8B8A8_UNORM] = { 0, 0 },
+ [VK_FMT_EAC_R11_UNORM] = { 0, 0 },
+ [VK_FMT_EAC_R11_SNORM] = { 0, 0 },
+ [VK_FMT_EAC_R11G11_UNORM] = { 0, 0 },
+ [VK_FMT_EAC_R11G11_SNORM] = { 0, 0 },
+ [VK_FMT_ASTC_4x4_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_4x4_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_5x4_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_5x4_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_5x5_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_5x5_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_6x5_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_6x5_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_6x6_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_6x6_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_8x5_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_8x5_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_8x6_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_8x6_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_8x8_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_8x8_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_10x5_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_10x5_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_10x6_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_10x6_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_10x8_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_10x8_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_10x10_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_10x10_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_12x10_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_12x10_SRGB] = { 0, 0 },
+ [VK_FMT_ASTC_12x12_UNORM] = { 0, 0 },
+ [VK_FMT_ASTC_12x12_SRGB] = { 0, 0 },
+ [VK_FMT_B5G6R5_UNORM] = { 2, 3 },
+ [VK_FMT_B5G6R5_USCALED] = { 2, 3 },
+ [VK_FMT_B8G8R8_UNORM] = { 3, 3 },
+ [VK_FMT_B8G8R8_SNORM] = { 3, 3 },
+ [VK_FMT_B8G8R8_USCALED] = { 3, 3 },
+ [VK_FMT_B8G8R8_SSCALED] = { 3, 3 },
+ [VK_FMT_B8G8R8_UINT] = { 3, 3 },
+ [VK_FMT_B8G8R8_SINT] = { 3, 3 },
+ [VK_FMT_B8G8R8_SRGB] = { 3, 3 },
+ [VK_FMT_B8G8R8A8_UNORM] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_SNORM] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_USCALED] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_SSCALED] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_UINT] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_SINT] = { 4, 4 },
+ [VK_FMT_B8G8R8A8_SRGB] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_UNORM] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_SNORM] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_USCALED] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_SSCALED] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_UINT] = { 4, 4 },
+ [VK_FMT_B10G10R10A2_SINT] = { 4, 4 },
};
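Aside from the rename, the table above is a plain C99 designated-initializer array indexed by the format enum. A minimal, self-contained sketch of that lookup pattern (the `FMT_*` names and `fmt_table` are invented for illustration and are not part of this patch):

```c
#include <stddef.h>

/* Hypothetical miniature of the icd_format_table pattern: the format enum
 * indexes directly into a { byte size, channel count } record. */
typedef enum {
    FMT_UNDEFINED,
    FMT_R8G8B8A8_UNORM,
    FMT_R32G32_SFLOAT,
    FMT_NUM
} fmt_t;

static const struct {
    size_t size;            /* bytes per texel (or per compressed block) */
    unsigned channel_count; /* number of channels */
} fmt_table[FMT_NUM] = {
    [FMT_UNDEFINED]      = { 0, 0 },
    [FMT_R8G8B8A8_UNORM] = { 4, 4 },
    [FMT_R32G32_SFLOAT]  = { 8, 2 },
};

static size_t fmt_get_size(fmt_t f) { return fmt_table[f].size; }
static unsigned fmt_get_channels(fmt_t f) { return fmt_table[f].channel_count; }
```

Designated initializers keep the table robust against enum reordering: any format not listed is zero-initialized, which is also how the not-yet-supported compressed formats below read back as `{ 0, 0 }`.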
-bool icd_format_is_ds(XGL_FORMAT format)
+bool icd_format_is_ds(VK_FORMAT format)
{
bool is_ds = false;
switch (format) {
- case XGL_FMT_D16_UNORM:
- case XGL_FMT_D24_UNORM:
- case XGL_FMT_D32_SFLOAT:
- case XGL_FMT_S8_UINT:
- case XGL_FMT_D16_UNORM_S8_UINT:
- case XGL_FMT_D24_UNORM_S8_UINT:
- case XGL_FMT_D32_SFLOAT_S8_UINT:
+ case VK_FMT_D16_UNORM:
+ case VK_FMT_D24_UNORM:
+ case VK_FMT_D32_SFLOAT:
+ case VK_FMT_S8_UINT:
+ case VK_FMT_D16_UNORM_S8_UINT:
+ case VK_FMT_D24_UNORM_S8_UINT:
+ case VK_FMT_D32_SFLOAT_S8_UINT:
is_ds = true;
break;
    default:
        break;
    }

    return is_ds;
}
-bool icd_format_is_norm(XGL_FORMAT format)
+bool icd_format_is_norm(VK_FORMAT format)
{
bool is_norm = false;
switch (format) {
- case XGL_FMT_R4G4_UNORM:
- case XGL_FMT_R4G4B4A4_UNORM:
- case XGL_FMT_R5G6B5_UNORM:
- case XGL_FMT_R5G5B5A1_UNORM:
- case XGL_FMT_R8_UNORM:
- case XGL_FMT_R8_SNORM:
- case XGL_FMT_R8G8_UNORM:
- case XGL_FMT_R8G8_SNORM:
- case XGL_FMT_R8G8B8_UNORM:
- case XGL_FMT_R8G8B8_SNORM:
- case XGL_FMT_R8G8B8A8_UNORM:
- case XGL_FMT_R8G8B8A8_SNORM:
- case XGL_FMT_R10G10B10A2_UNORM:
- case XGL_FMT_R10G10B10A2_SNORM:
- case XGL_FMT_R16_UNORM:
- case XGL_FMT_R16_SNORM:
- case XGL_FMT_R16G16_UNORM:
- case XGL_FMT_R16G16_SNORM:
- case XGL_FMT_R16G16B16_UNORM:
- case XGL_FMT_R16G16B16_SNORM:
- case XGL_FMT_R16G16B16A16_UNORM:
- case XGL_FMT_R16G16B16A16_SNORM:
- case XGL_FMT_BC1_RGB_UNORM:
- case XGL_FMT_BC2_UNORM:
- case XGL_FMT_BC3_UNORM:
- case XGL_FMT_BC4_UNORM:
- case XGL_FMT_BC4_SNORM:
- case XGL_FMT_BC5_UNORM:
- case XGL_FMT_BC5_SNORM:
- case XGL_FMT_BC7_UNORM:
- case XGL_FMT_ETC2_R8G8B8_UNORM:
- case XGL_FMT_ETC2_R8G8B8A1_UNORM:
- case XGL_FMT_ETC2_R8G8B8A8_UNORM:
- case XGL_FMT_EAC_R11_UNORM:
- case XGL_FMT_EAC_R11_SNORM:
- case XGL_FMT_EAC_R11G11_UNORM:
- case XGL_FMT_EAC_R11G11_SNORM:
- case XGL_FMT_ASTC_4x4_UNORM:
- case XGL_FMT_ASTC_5x4_UNORM:
- case XGL_FMT_ASTC_5x5_UNORM:
- case XGL_FMT_ASTC_6x5_UNORM:
- case XGL_FMT_ASTC_6x6_UNORM:
- case XGL_FMT_ASTC_8x5_UNORM:
- case XGL_FMT_ASTC_8x6_UNORM:
- case XGL_FMT_ASTC_8x8_UNORM:
- case XGL_FMT_ASTC_10x5_UNORM:
- case XGL_FMT_ASTC_10x6_UNORM:
- case XGL_FMT_ASTC_10x8_UNORM:
- case XGL_FMT_ASTC_10x10_UNORM:
- case XGL_FMT_ASTC_12x10_UNORM:
- case XGL_FMT_ASTC_12x12_UNORM:
- case XGL_FMT_B5G6R5_UNORM:
- case XGL_FMT_B8G8R8_UNORM:
- case XGL_FMT_B8G8R8_SNORM:
- case XGL_FMT_B8G8R8A8_UNORM:
- case XGL_FMT_B8G8R8A8_SNORM:
- case XGL_FMT_B10G10R10A2_UNORM:
- case XGL_FMT_B10G10R10A2_SNORM:
+ case VK_FMT_R4G4_UNORM:
+ case VK_FMT_R4G4B4A4_UNORM:
+ case VK_FMT_R5G6B5_UNORM:
+ case VK_FMT_R5G5B5A1_UNORM:
+ case VK_FMT_R8_UNORM:
+ case VK_FMT_R8_SNORM:
+ case VK_FMT_R8G8_UNORM:
+ case VK_FMT_R8G8_SNORM:
+ case VK_FMT_R8G8B8_UNORM:
+ case VK_FMT_R8G8B8_SNORM:
+ case VK_FMT_R8G8B8A8_UNORM:
+ case VK_FMT_R8G8B8A8_SNORM:
+ case VK_FMT_R10G10B10A2_UNORM:
+ case VK_FMT_R10G10B10A2_SNORM:
+ case VK_FMT_R16_UNORM:
+ case VK_FMT_R16_SNORM:
+ case VK_FMT_R16G16_UNORM:
+ case VK_FMT_R16G16_SNORM:
+ case VK_FMT_R16G16B16_UNORM:
+ case VK_FMT_R16G16B16_SNORM:
+ case VK_FMT_R16G16B16A16_UNORM:
+ case VK_FMT_R16G16B16A16_SNORM:
+ case VK_FMT_BC1_RGB_UNORM:
+ case VK_FMT_BC2_UNORM:
+ case VK_FMT_BC3_UNORM:
+ case VK_FMT_BC4_UNORM:
+ case VK_FMT_BC4_SNORM:
+ case VK_FMT_BC5_UNORM:
+ case VK_FMT_BC5_SNORM:
+ case VK_FMT_BC7_UNORM:
+ case VK_FMT_ETC2_R8G8B8_UNORM:
+ case VK_FMT_ETC2_R8G8B8A1_UNORM:
+ case VK_FMT_ETC2_R8G8B8A8_UNORM:
+ case VK_FMT_EAC_R11_UNORM:
+ case VK_FMT_EAC_R11_SNORM:
+ case VK_FMT_EAC_R11G11_UNORM:
+ case VK_FMT_EAC_R11G11_SNORM:
+ case VK_FMT_ASTC_4x4_UNORM:
+ case VK_FMT_ASTC_5x4_UNORM:
+ case VK_FMT_ASTC_5x5_UNORM:
+ case VK_FMT_ASTC_6x5_UNORM:
+ case VK_FMT_ASTC_6x6_UNORM:
+ case VK_FMT_ASTC_8x5_UNORM:
+ case VK_FMT_ASTC_8x6_UNORM:
+ case VK_FMT_ASTC_8x8_UNORM:
+ case VK_FMT_ASTC_10x5_UNORM:
+ case VK_FMT_ASTC_10x6_UNORM:
+ case VK_FMT_ASTC_10x8_UNORM:
+ case VK_FMT_ASTC_10x10_UNORM:
+ case VK_FMT_ASTC_12x10_UNORM:
+ case VK_FMT_ASTC_12x12_UNORM:
+ case VK_FMT_B5G6R5_UNORM:
+ case VK_FMT_B8G8R8_UNORM:
+ case VK_FMT_B8G8R8_SNORM:
+ case VK_FMT_B8G8R8A8_UNORM:
+ case VK_FMT_B8G8R8A8_SNORM:
+ case VK_FMT_B10G10R10A2_UNORM:
+ case VK_FMT_B10G10R10A2_SNORM:
is_norm = true;
break;
    default:
        break;
    }

    return is_norm;
}
-bool icd_format_is_int(XGL_FORMAT format)
+bool icd_format_is_int(VK_FORMAT format)
{
bool is_int = false;
switch (format) {
- case XGL_FMT_R8_UINT:
- case XGL_FMT_R8_SINT:
- case XGL_FMT_R8G8_UINT:
- case XGL_FMT_R8G8_SINT:
- case XGL_FMT_R8G8B8_UINT:
- case XGL_FMT_R8G8B8_SINT:
- case XGL_FMT_R8G8B8A8_UINT:
- case XGL_FMT_R8G8B8A8_SINT:
- case XGL_FMT_R10G10B10A2_UINT:
- case XGL_FMT_R10G10B10A2_SINT:
- case XGL_FMT_R16_UINT:
- case XGL_FMT_R16_SINT:
- case XGL_FMT_R16G16_UINT:
- case XGL_FMT_R16G16_SINT:
- case XGL_FMT_R16G16B16_UINT:
- case XGL_FMT_R16G16B16_SINT:
- case XGL_FMT_R16G16B16A16_UINT:
- case XGL_FMT_R16G16B16A16_SINT:
- case XGL_FMT_R32_UINT:
- case XGL_FMT_R32_SINT:
- case XGL_FMT_R32G32_UINT:
- case XGL_FMT_R32G32_SINT:
- case XGL_FMT_R32G32B32_UINT:
- case XGL_FMT_R32G32B32_SINT:
- case XGL_FMT_R32G32B32A32_UINT:
- case XGL_FMT_R32G32B32A32_SINT:
- case XGL_FMT_B8G8R8_UINT:
- case XGL_FMT_B8G8R8_SINT:
- case XGL_FMT_B8G8R8A8_UINT:
- case XGL_FMT_B8G8R8A8_SINT:
- case XGL_FMT_B10G10R10A2_UINT:
- case XGL_FMT_B10G10R10A2_SINT:
+ case VK_FMT_R8_UINT:
+ case VK_FMT_R8_SINT:
+ case VK_FMT_R8G8_UINT:
+ case VK_FMT_R8G8_SINT:
+ case VK_FMT_R8G8B8_UINT:
+ case VK_FMT_R8G8B8_SINT:
+ case VK_FMT_R8G8B8A8_UINT:
+ case VK_FMT_R8G8B8A8_SINT:
+ case VK_FMT_R10G10B10A2_UINT:
+ case VK_FMT_R10G10B10A2_SINT:
+ case VK_FMT_R16_UINT:
+ case VK_FMT_R16_SINT:
+ case VK_FMT_R16G16_UINT:
+ case VK_FMT_R16G16_SINT:
+ case VK_FMT_R16G16B16_UINT:
+ case VK_FMT_R16G16B16_SINT:
+ case VK_FMT_R16G16B16A16_UINT:
+ case VK_FMT_R16G16B16A16_SINT:
+ case VK_FMT_R32_UINT:
+ case VK_FMT_R32_SINT:
+ case VK_FMT_R32G32_UINT:
+ case VK_FMT_R32G32_SINT:
+ case VK_FMT_R32G32B32_UINT:
+ case VK_FMT_R32G32B32_SINT:
+ case VK_FMT_R32G32B32A32_UINT:
+ case VK_FMT_R32G32B32A32_SINT:
+ case VK_FMT_B8G8R8_UINT:
+ case VK_FMT_B8G8R8_SINT:
+ case VK_FMT_B8G8R8A8_UINT:
+ case VK_FMT_B8G8R8A8_SINT:
+ case VK_FMT_B10G10R10A2_UINT:
+ case VK_FMT_B10G10R10A2_SINT:
is_int = true;
break;
    default:
        break;
    }

    return is_int;
}
-bool icd_format_is_float(XGL_FORMAT format)
+bool icd_format_is_float(VK_FORMAT format)
{
bool is_float = false;
switch (format) {
- case XGL_FMT_R16_SFLOAT:
- case XGL_FMT_R16G16_SFLOAT:
- case XGL_FMT_R16G16B16_SFLOAT:
- case XGL_FMT_R16G16B16A16_SFLOAT:
- case XGL_FMT_R32_SFLOAT:
- case XGL_FMT_R32G32_SFLOAT:
- case XGL_FMT_R32G32B32_SFLOAT:
- case XGL_FMT_R32G32B32A32_SFLOAT:
- case XGL_FMT_R64_SFLOAT:
- case XGL_FMT_R64G64_SFLOAT:
- case XGL_FMT_R64G64B64_SFLOAT:
- case XGL_FMT_R64G64B64A64_SFLOAT:
- case XGL_FMT_R11G11B10_UFLOAT:
- case XGL_FMT_R9G9B9E5_UFLOAT:
- case XGL_FMT_BC6H_UFLOAT:
- case XGL_FMT_BC6H_SFLOAT:
+ case VK_FMT_R16_SFLOAT:
+ case VK_FMT_R16G16_SFLOAT:
+ case VK_FMT_R16G16B16_SFLOAT:
+ case VK_FMT_R16G16B16A16_SFLOAT:
+ case VK_FMT_R32_SFLOAT:
+ case VK_FMT_R32G32_SFLOAT:
+ case VK_FMT_R32G32B32_SFLOAT:
+ case VK_FMT_R32G32B32A32_SFLOAT:
+ case VK_FMT_R64_SFLOAT:
+ case VK_FMT_R64G64_SFLOAT:
+ case VK_FMT_R64G64B64_SFLOAT:
+ case VK_FMT_R64G64B64A64_SFLOAT:
+ case VK_FMT_R11G11B10_UFLOAT:
+ case VK_FMT_R9G9B9E5_UFLOAT:
+ case VK_FMT_BC6H_UFLOAT:
+ case VK_FMT_BC6H_SFLOAT:
is_float = true;
break;
    default:
        break;
    }

    return is_float;
}
-bool icd_format_is_srgb(XGL_FORMAT format)
+bool icd_format_is_srgb(VK_FORMAT format)
{
bool is_srgb = false;
switch (format) {
- case XGL_FMT_R8_SRGB:
- case XGL_FMT_R8G8_SRGB:
- case XGL_FMT_R8G8B8_SRGB:
- case XGL_FMT_R8G8B8A8_SRGB:
- case XGL_FMT_BC1_RGB_SRGB:
- case XGL_FMT_BC2_SRGB:
- case XGL_FMT_BC3_SRGB:
- case XGL_FMT_BC7_SRGB:
- case XGL_FMT_ASTC_4x4_SRGB:
- case XGL_FMT_ASTC_5x4_SRGB:
- case XGL_FMT_ASTC_5x5_SRGB:
- case XGL_FMT_ASTC_6x5_SRGB:
- case XGL_FMT_ASTC_6x6_SRGB:
- case XGL_FMT_ASTC_8x5_SRGB:
- case XGL_FMT_ASTC_8x6_SRGB:
- case XGL_FMT_ASTC_8x8_SRGB:
- case XGL_FMT_ASTC_10x5_SRGB:
- case XGL_FMT_ASTC_10x6_SRGB:
- case XGL_FMT_ASTC_10x8_SRGB:
- case XGL_FMT_ASTC_10x10_SRGB:
- case XGL_FMT_ASTC_12x10_SRGB:
- case XGL_FMT_ASTC_12x12_SRGB:
- case XGL_FMT_B8G8R8_SRGB:
- case XGL_FMT_B8G8R8A8_SRGB:
+ case VK_FMT_R8_SRGB:
+ case VK_FMT_R8G8_SRGB:
+ case VK_FMT_R8G8B8_SRGB:
+ case VK_FMT_R8G8B8A8_SRGB:
+ case VK_FMT_BC1_RGB_SRGB:
+ case VK_FMT_BC2_SRGB:
+ case VK_FMT_BC3_SRGB:
+ case VK_FMT_BC7_SRGB:
+ case VK_FMT_ASTC_4x4_SRGB:
+ case VK_FMT_ASTC_5x4_SRGB:
+ case VK_FMT_ASTC_5x5_SRGB:
+ case VK_FMT_ASTC_6x5_SRGB:
+ case VK_FMT_ASTC_6x6_SRGB:
+ case VK_FMT_ASTC_8x5_SRGB:
+ case VK_FMT_ASTC_8x6_SRGB:
+ case VK_FMT_ASTC_8x8_SRGB:
+ case VK_FMT_ASTC_10x5_SRGB:
+ case VK_FMT_ASTC_10x6_SRGB:
+ case VK_FMT_ASTC_10x8_SRGB:
+ case VK_FMT_ASTC_10x10_SRGB:
+ case VK_FMT_ASTC_12x10_SRGB:
+ case VK_FMT_ASTC_12x12_SRGB:
+ case VK_FMT_B8G8R8_SRGB:
+ case VK_FMT_B8G8R8A8_SRGB:
is_srgb = true;
break;
    default:
        break;
    }

    return is_srgb;
}
-bool icd_format_is_compressed(XGL_FORMAT format)
+bool icd_format_is_compressed(VK_FORMAT format)
{
switch (format) {
- case XGL_FMT_BC1_RGB_UNORM:
- case XGL_FMT_BC1_RGB_SRGB:
- case XGL_FMT_BC2_UNORM:
- case XGL_FMT_BC2_SRGB:
- case XGL_FMT_BC3_UNORM:
- case XGL_FMT_BC3_SRGB:
- case XGL_FMT_BC4_UNORM:
- case XGL_FMT_BC4_SNORM:
- case XGL_FMT_BC5_UNORM:
- case XGL_FMT_BC5_SNORM:
- case XGL_FMT_BC6H_UFLOAT:
- case XGL_FMT_BC6H_SFLOAT:
- case XGL_FMT_BC7_UNORM:
- case XGL_FMT_BC7_SRGB:
- case XGL_FMT_ETC2_R8G8B8_UNORM:
- case XGL_FMT_ETC2_R8G8B8A1_UNORM:
- case XGL_FMT_ETC2_R8G8B8A8_UNORM:
- case XGL_FMT_EAC_R11_UNORM:
- case XGL_FMT_EAC_R11_SNORM:
- case XGL_FMT_EAC_R11G11_UNORM:
- case XGL_FMT_EAC_R11G11_SNORM:
- case XGL_FMT_ASTC_4x4_UNORM:
- case XGL_FMT_ASTC_4x4_SRGB:
- case XGL_FMT_ASTC_5x4_UNORM:
- case XGL_FMT_ASTC_5x4_SRGB:
- case XGL_FMT_ASTC_5x5_UNORM:
- case XGL_FMT_ASTC_5x5_SRGB:
- case XGL_FMT_ASTC_6x5_UNORM:
- case XGL_FMT_ASTC_6x5_SRGB:
- case XGL_FMT_ASTC_6x6_UNORM:
- case XGL_FMT_ASTC_6x6_SRGB:
- case XGL_FMT_ASTC_8x5_UNORM:
- case XGL_FMT_ASTC_8x5_SRGB:
- case XGL_FMT_ASTC_8x6_UNORM:
- case XGL_FMT_ASTC_8x6_SRGB:
- case XGL_FMT_ASTC_8x8_UNORM:
- case XGL_FMT_ASTC_8x8_SRGB:
- case XGL_FMT_ASTC_10x5_UNORM:
- case XGL_FMT_ASTC_10x5_SRGB:
- case XGL_FMT_ASTC_10x6_UNORM:
- case XGL_FMT_ASTC_10x6_SRGB:
- case XGL_FMT_ASTC_10x8_UNORM:
- case XGL_FMT_ASTC_10x8_SRGB:
- case XGL_FMT_ASTC_10x10_UNORM:
- case XGL_FMT_ASTC_10x10_SRGB:
- case XGL_FMT_ASTC_12x10_UNORM:
- case XGL_FMT_ASTC_12x10_SRGB:
- case XGL_FMT_ASTC_12x12_UNORM:
- case XGL_FMT_ASTC_12x12_SRGB:
+ case VK_FMT_BC1_RGB_UNORM:
+ case VK_FMT_BC1_RGB_SRGB:
+ case VK_FMT_BC2_UNORM:
+ case VK_FMT_BC2_SRGB:
+ case VK_FMT_BC3_UNORM:
+ case VK_FMT_BC3_SRGB:
+ case VK_FMT_BC4_UNORM:
+ case VK_FMT_BC4_SNORM:
+ case VK_FMT_BC5_UNORM:
+ case VK_FMT_BC5_SNORM:
+ case VK_FMT_BC6H_UFLOAT:
+ case VK_FMT_BC6H_SFLOAT:
+ case VK_FMT_BC7_UNORM:
+ case VK_FMT_BC7_SRGB:
+ case VK_FMT_ETC2_R8G8B8_UNORM:
+ case VK_FMT_ETC2_R8G8B8A1_UNORM:
+ case VK_FMT_ETC2_R8G8B8A8_UNORM:
+ case VK_FMT_EAC_R11_UNORM:
+ case VK_FMT_EAC_R11_SNORM:
+ case VK_FMT_EAC_R11G11_UNORM:
+ case VK_FMT_EAC_R11G11_SNORM:
+ case VK_FMT_ASTC_4x4_UNORM:
+ case VK_FMT_ASTC_4x4_SRGB:
+ case VK_FMT_ASTC_5x4_UNORM:
+ case VK_FMT_ASTC_5x4_SRGB:
+ case VK_FMT_ASTC_5x5_UNORM:
+ case VK_FMT_ASTC_5x5_SRGB:
+ case VK_FMT_ASTC_6x5_UNORM:
+ case VK_FMT_ASTC_6x5_SRGB:
+ case VK_FMT_ASTC_6x6_UNORM:
+ case VK_FMT_ASTC_6x6_SRGB:
+ case VK_FMT_ASTC_8x5_UNORM:
+ case VK_FMT_ASTC_8x5_SRGB:
+ case VK_FMT_ASTC_8x6_UNORM:
+ case VK_FMT_ASTC_8x6_SRGB:
+ case VK_FMT_ASTC_8x8_UNORM:
+ case VK_FMT_ASTC_8x8_SRGB:
+ case VK_FMT_ASTC_10x5_UNORM:
+ case VK_FMT_ASTC_10x5_SRGB:
+ case VK_FMT_ASTC_10x6_UNORM:
+ case VK_FMT_ASTC_10x6_SRGB:
+ case VK_FMT_ASTC_10x8_UNORM:
+ case VK_FMT_ASTC_10x8_SRGB:
+ case VK_FMT_ASTC_10x10_UNORM:
+ case VK_FMT_ASTC_10x10_SRGB:
+ case VK_FMT_ASTC_12x10_UNORM:
+ case VK_FMT_ASTC_12x10_SRGB:
+ case VK_FMT_ASTC_12x12_UNORM:
+ case VK_FMT_ASTC_12x12_SRGB:
return true;
default:
return false;
}
}
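All of the `icd_format_is_*` predicates above share one idiom: the matching case labels fall through to a single `return true` (or a flag assignment), and everything else takes the `default` path. A hedged, self-contained miniature of that style (the `cfmt_t` enum and `F_*` names are invented for illustration):

```c
#include <stdbool.h>

/* Hypothetical miniature of the predicate style used in this file:
 * block-compressed case labels fall through to one "return true";
 * every other format is treated as uncompressed. */
typedef enum {
    F_R8_UNORM,
    F_BC1_RGB_UNORM,
    F_BC7_SRGB,
    F_ASTC_4x4_SRGB
} cfmt_t;

static bool cfmt_is_compressed(cfmt_t f)
{
    switch (f) {
    case F_BC1_RGB_UNORM:
    case F_BC7_SRGB:
    case F_ASTC_4x4_SRGB:
        return true;
    default:
        return false;
    }
}
```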
-size_t icd_format_get_size(XGL_FORMAT format)
+size_t icd_format_get_size(VK_FORMAT format)
{
return icd_format_table[format].size;
}
-XGL_IMAGE_FORMAT_CLASS icd_format_get_class(XGL_FORMAT format)
+VK_IMAGE_FORMAT_CLASS icd_format_get_class(VK_FORMAT format)
{
if (icd_format_is_undef(format))
assert(!"undefined format");
if (icd_format_is_compressed(format)) {
switch (icd_format_get_size(format)) {
case 8:
- return XGL_IMAGE_FORMAT_CLASS_64_BIT_BLOCK;
+ return VK_IMAGE_FORMAT_CLASS_64_BIT_BLOCK;
case 16:
- return XGL_IMAGE_FORMAT_CLASS_128_BIT_BLOCK;
+ return VK_IMAGE_FORMAT_CLASS_128_BIT_BLOCK;
default:
assert(!"illegal compressed format");
}
} else if (icd_format_is_ds(format)) {
switch (icd_format_get_size(format)) {
case 1:
- return XGL_IMAGE_FORMAT_CLASS_S8;
+ return VK_IMAGE_FORMAT_CLASS_S8;
case 2:
- return XGL_IMAGE_FORMAT_CLASS_D16;
+ return VK_IMAGE_FORMAT_CLASS_D16;
case 3:
switch (icd_format_get_channel_count(format)) {
case 1:
- return XGL_IMAGE_FORMAT_CLASS_D24;
+ return VK_IMAGE_FORMAT_CLASS_D24;
case 2:
- return XGL_IMAGE_FORMAT_CLASS_D16S8;
+ return VK_IMAGE_FORMAT_CLASS_D16S8;
default:
assert(!"illegal depth stencil format channels");
}
case 4:
switch (icd_format_get_channel_count(format)) {
case 1:
- return XGL_IMAGE_FORMAT_CLASS_D32;
+ return VK_IMAGE_FORMAT_CLASS_D32;
case 2:
- return XGL_IMAGE_FORMAT_CLASS_D24S8;
+ return VK_IMAGE_FORMAT_CLASS_D24S8;
default:
assert(!"illegal depth stencil format channels");
}
case 5:
- return XGL_IMAGE_FORMAT_CLASS_D32S8;
+ return VK_IMAGE_FORMAT_CLASS_D32S8;
default:
assert(!"illegal depth stencil format");
}
} else { /* uncompressed color format */
switch (icd_format_get_size(format)) {
case 1:
- return XGL_IMAGE_FORMAT_CLASS_8_BITS;
+ return VK_IMAGE_FORMAT_CLASS_8_BITS;
case 2:
- return XGL_IMAGE_FORMAT_CLASS_16_BITS;
+ return VK_IMAGE_FORMAT_CLASS_16_BITS;
case 3:
- return XGL_IMAGE_FORMAT_CLASS_24_BITS;
+ return VK_IMAGE_FORMAT_CLASS_24_BITS;
case 4:
- return XGL_IMAGE_FORMAT_CLASS_32_BITS;
+ return VK_IMAGE_FORMAT_CLASS_32_BITS;
case 6:
- return XGL_IMAGE_FORMAT_CLASS_48_BITS;
+ return VK_IMAGE_FORMAT_CLASS_48_BITS;
case 8:
- return XGL_IMAGE_FORMAT_CLASS_64_BITS;
+ return VK_IMAGE_FORMAT_CLASS_64_BITS;
case 12:
- return XGL_IMAGE_FORMAT_CLASS_96_BITS;
+ return VK_IMAGE_FORMAT_CLASS_96_BITS;
case 16:
- return XGL_IMAGE_FORMAT_CLASS_128_BITS;
+ return VK_IMAGE_FORMAT_CLASS_128_BITS;
default:
assert(!"illegal uncompressed color format");
}
}
}
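For the uncompressed-color branch of `icd_format_get_class` above, the class is derived from the texel size alone. A minimal sketch of that mapping (the `img_class_t` names are invented stand-ins for the `VK_IMAGE_FORMAT_CLASS_*` values):

```c
#include <stddef.h>

/* Hypothetical sketch of the uncompressed-color branch above:
 * texel byte size alone selects the image format class. */
typedef enum {
    CLASS_8_BITS, CLASS_16_BITS, CLASS_24_BITS, CLASS_32_BITS,
    CLASS_48_BITS, CLASS_64_BITS, CLASS_96_BITS, CLASS_128_BITS,
    CLASS_INVALID
} img_class_t;

static img_class_t class_from_size(size_t size)
{
    switch (size) {
    case 1:  return CLASS_8_BITS;
    case 2:  return CLASS_16_BITS;
    case 3:  return CLASS_24_BITS;
    case 4:  return CLASS_32_BITS;
    case 6:  return CLASS_48_BITS;
    case 8:  return CLASS_64_BITS;
    case 12: return CLASS_96_BITS;
    case 16: return CLASS_128_BITS;
    default: return CLASS_INVALID; /* no uncompressed format has this size */
    }
}
```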
-unsigned int icd_format_get_channel_count(XGL_FORMAT format)
+unsigned int icd_format_get_channel_count(VK_FORMAT format)
{
return icd_format_table[format].channel_count;
}
/**
 * Convert a raw RGBA color to a raw value. \p value must have at least
* icd_format_get_size(format) bytes.
*/
-void icd_format_get_raw_value(XGL_FORMAT format,
+void icd_format_get_raw_value(VK_FORMAT format,
const uint32_t color[4],
void *value)
{
/* assume little-endian */
switch (format) {
- case XGL_FMT_UNDEFINED:
+ case VK_FMT_UNDEFINED:
break;
- case XGL_FMT_R4G4_UNORM:
- case XGL_FMT_R4G4_USCALED:
+ case VK_FMT_R4G4_UNORM:
+ case VK_FMT_R4G4_USCALED:
((uint8_t *) value)[0] = (color[0] & 0xf) << 0 |
(color[1] & 0xf) << 4;
break;
- case XGL_FMT_R4G4B4A4_UNORM:
- case XGL_FMT_R4G4B4A4_USCALED:
+ case VK_FMT_R4G4B4A4_UNORM:
+ case VK_FMT_R4G4B4A4_USCALED:
((uint16_t *) value)[0] = (color[0] & 0xf) << 0 |
(color[1] & 0xf) << 4 |
(color[2] & 0xf) << 8 |
(color[3] & 0xf) << 12;
break;
- case XGL_FMT_R5G6B5_UNORM:
- case XGL_FMT_R5G6B5_USCALED:
+ case VK_FMT_R5G6B5_UNORM:
+ case VK_FMT_R5G6B5_USCALED:
((uint16_t *) value)[0] = (color[0] & 0x1f) << 0 |
(color[1] & 0x3f) << 5 |
(color[2] & 0x1f) << 11;
break;
- case XGL_FMT_B5G6R5_UNORM:
+ case VK_FMT_B5G6R5_UNORM:
((uint16_t *) value)[0] = (color[2] & 0x1f) << 0 |
(color[1] & 0x3f) << 5 |
(color[0] & 0x1f) << 11;
break;
- case XGL_FMT_R5G5B5A1_UNORM:
- case XGL_FMT_R5G5B5A1_USCALED:
+ case VK_FMT_R5G5B5A1_UNORM:
+ case VK_FMT_R5G5B5A1_USCALED:
((uint16_t *) value)[0] = (color[0] & 0x1f) << 0 |
(color[1] & 0x1f) << 5 |
(color[2] & 0x1f) << 10 |
(color[3] & 0x1) << 15;
break;
- case XGL_FMT_R8_UNORM:
- case XGL_FMT_R8_SNORM:
- case XGL_FMT_R8_USCALED:
- case XGL_FMT_R8_SSCALED:
- case XGL_FMT_R8_UINT:
- case XGL_FMT_R8_SINT:
- case XGL_FMT_R8_SRGB:
+ case VK_FMT_R8_UNORM:
+ case VK_FMT_R8_SNORM:
+ case VK_FMT_R8_USCALED:
+ case VK_FMT_R8_SSCALED:
+ case VK_FMT_R8_UINT:
+ case VK_FMT_R8_SINT:
+ case VK_FMT_R8_SRGB:
((uint8_t *) value)[0] = (uint8_t) color[0];
break;
- case XGL_FMT_R8G8_UNORM:
- case XGL_FMT_R8G8_SNORM:
- case XGL_FMT_R8G8_USCALED:
- case XGL_FMT_R8G8_SSCALED:
- case XGL_FMT_R8G8_UINT:
- case XGL_FMT_R8G8_SINT:
- case XGL_FMT_R8G8_SRGB:
+ case VK_FMT_R8G8_UNORM:
+ case VK_FMT_R8G8_SNORM:
+ case VK_FMT_R8G8_USCALED:
+ case VK_FMT_R8G8_SSCALED:
+ case VK_FMT_R8G8_UINT:
+ case VK_FMT_R8G8_SINT:
+ case VK_FMT_R8G8_SRGB:
((uint8_t *) value)[0] = (uint8_t) color[0];
((uint8_t *) value)[1] = (uint8_t) color[1];
break;
- case XGL_FMT_R8G8B8A8_UNORM:
- case XGL_FMT_R8G8B8A8_SNORM:
- case XGL_FMT_R8G8B8A8_USCALED:
- case XGL_FMT_R8G8B8A8_SSCALED:
- case XGL_FMT_R8G8B8A8_UINT:
- case XGL_FMT_R8G8B8A8_SINT:
- case XGL_FMT_R8G8B8A8_SRGB:
+ case VK_FMT_R8G8B8A8_UNORM:
+ case VK_FMT_R8G8B8A8_SNORM:
+ case VK_FMT_R8G8B8A8_USCALED:
+ case VK_FMT_R8G8B8A8_SSCALED:
+ case VK_FMT_R8G8B8A8_UINT:
+ case VK_FMT_R8G8B8A8_SINT:
+ case VK_FMT_R8G8B8A8_SRGB:
((uint8_t *) value)[0] = (uint8_t) color[0];
((uint8_t *) value)[1] = (uint8_t) color[1];
((uint8_t *) value)[2] = (uint8_t) color[2];
((uint8_t *) value)[3] = (uint8_t) color[3];
break;
- case XGL_FMT_B8G8R8A8_UNORM:
- case XGL_FMT_B8G8R8A8_SRGB:
+ case VK_FMT_B8G8R8A8_UNORM:
+ case VK_FMT_B8G8R8A8_SRGB:
((uint8_t *) value)[0] = (uint8_t) color[2];
((uint8_t *) value)[1] = (uint8_t) color[1];
((uint8_t *) value)[2] = (uint8_t) color[0];
((uint8_t *) value)[3] = (uint8_t) color[3];
break;
- case XGL_FMT_R11G11B10_UFLOAT:
+ case VK_FMT_R11G11B10_UFLOAT:
((uint32_t *) value)[0] = (color[0] & 0x7ff) << 0 |
(color[1] & 0x7ff) << 11 |
(color[2] & 0x3ff) << 22;
break;
- case XGL_FMT_R10G10B10A2_UNORM:
- case XGL_FMT_R10G10B10A2_SNORM:
- case XGL_FMT_R10G10B10A2_USCALED:
- case XGL_FMT_R10G10B10A2_SSCALED:
- case XGL_FMT_R10G10B10A2_UINT:
- case XGL_FMT_R10G10B10A2_SINT:
+ case VK_FMT_R10G10B10A2_UNORM:
+ case VK_FMT_R10G10B10A2_SNORM:
+ case VK_FMT_R10G10B10A2_USCALED:
+ case VK_FMT_R10G10B10A2_SSCALED:
+ case VK_FMT_R10G10B10A2_UINT:
+ case VK_FMT_R10G10B10A2_SINT:
((uint32_t *) value)[0] = (color[0] & 0x3ff) << 0 |
(color[1] & 0x3ff) << 10 |
(color[2] & 0x3ff) << 20 |
(color[3] & 0x3) << 30;
break;
- case XGL_FMT_R16_UNORM:
- case XGL_FMT_R16_SNORM:
- case XGL_FMT_R16_USCALED:
- case XGL_FMT_R16_SSCALED:
- case XGL_FMT_R16_UINT:
- case XGL_FMT_R16_SINT:
- case XGL_FMT_R16_SFLOAT:
+ case VK_FMT_R16_UNORM:
+ case VK_FMT_R16_SNORM:
+ case VK_FMT_R16_USCALED:
+ case VK_FMT_R16_SSCALED:
+ case VK_FMT_R16_UINT:
+ case VK_FMT_R16_SINT:
+ case VK_FMT_R16_SFLOAT:
((uint16_t *) value)[0] = (uint16_t) color[0];
break;
- case XGL_FMT_R16G16_UNORM:
- case XGL_FMT_R16G16_SNORM:
- case XGL_FMT_R16G16_USCALED:
- case XGL_FMT_R16G16_SSCALED:
- case XGL_FMT_R16G16_UINT:
- case XGL_FMT_R16G16_SINT:
- case XGL_FMT_R16G16_SFLOAT:
+ case VK_FMT_R16G16_UNORM:
+ case VK_FMT_R16G16_SNORM:
+ case VK_FMT_R16G16_USCALED:
+ case VK_FMT_R16G16_SSCALED:
+ case VK_FMT_R16G16_UINT:
+ case VK_FMT_R16G16_SINT:
+ case VK_FMT_R16G16_SFLOAT:
((uint16_t *) value)[0] = (uint16_t) color[0];
((uint16_t *) value)[1] = (uint16_t) color[1];
break;
- case XGL_FMT_R16G16B16A16_UNORM:
- case XGL_FMT_R16G16B16A16_SNORM:
- case XGL_FMT_R16G16B16A16_USCALED:
- case XGL_FMT_R16G16B16A16_SSCALED:
- case XGL_FMT_R16G16B16A16_UINT:
- case XGL_FMT_R16G16B16A16_SINT:
- case XGL_FMT_R16G16B16A16_SFLOAT:
+ case VK_FMT_R16G16B16A16_UNORM:
+ case VK_FMT_R16G16B16A16_SNORM:
+ case VK_FMT_R16G16B16A16_USCALED:
+ case VK_FMT_R16G16B16A16_SSCALED:
+ case VK_FMT_R16G16B16A16_UINT:
+ case VK_FMT_R16G16B16A16_SINT:
+ case VK_FMT_R16G16B16A16_SFLOAT:
((uint16_t *) value)[0] = (uint16_t) color[0];
((uint16_t *) value)[1] = (uint16_t) color[1];
((uint16_t *) value)[2] = (uint16_t) color[2];
((uint16_t *) value)[3] = (uint16_t) color[3];
break;
- case XGL_FMT_R32_UINT:
- case XGL_FMT_R32_SINT:
- case XGL_FMT_R32_SFLOAT:
+ case VK_FMT_R32_UINT:
+ case VK_FMT_R32_SINT:
+ case VK_FMT_R32_SFLOAT:
((uint32_t *) value)[0] = color[0];
break;
- case XGL_FMT_R32G32_UINT:
- case XGL_FMT_R32G32_SINT:
- case XGL_FMT_R32G32_SFLOAT:
+ case VK_FMT_R32G32_UINT:
+ case VK_FMT_R32G32_SINT:
+ case VK_FMT_R32G32_SFLOAT:
((uint32_t *) value)[0] = color[0];
((uint32_t *) value)[1] = color[1];
break;
- case XGL_FMT_R32G32B32_UINT:
- case XGL_FMT_R32G32B32_SINT:
- case XGL_FMT_R32G32B32_SFLOAT:
+ case VK_FMT_R32G32B32_UINT:
+ case VK_FMT_R32G32B32_SINT:
+ case VK_FMT_R32G32B32_SFLOAT:
((uint32_t *) value)[0] = color[0];
((uint32_t *) value)[1] = color[1];
((uint32_t *) value)[2] = color[2];
break;
- case XGL_FMT_R32G32B32A32_UINT:
- case XGL_FMT_R32G32B32A32_SINT:
- case XGL_FMT_R32G32B32A32_SFLOAT:
+ case VK_FMT_R32G32B32A32_UINT:
+ case VK_FMT_R32G32B32A32_SINT:
+ case VK_FMT_R32G32B32A32_SFLOAT:
((uint32_t *) value)[0] = color[0];
((uint32_t *) value)[1] = color[1];
((uint32_t *) value)[2] = color[2];
((uint32_t *) value)[3] = color[3];
break;
- case XGL_FMT_D16_UNORM_S8_UINT:
+ case VK_FMT_D16_UNORM_S8_UINT:
((uint16_t *) value)[0] = (uint16_t) color[0];
((char *) value)[2] = (uint8_t) color[1];
break;
- case XGL_FMT_D32_SFLOAT_S8_UINT:
+ case VK_FMT_D32_SFLOAT_S8_UINT:
((uint32_t *) value)[0] = (uint32_t) color[0];
((char *) value)[4] = (uint8_t) color[1];
break;
- case XGL_FMT_R9G9B9E5_UFLOAT:
+ case VK_FMT_R9G9B9E5_UFLOAT:
((uint32_t *) value)[0] = (color[0] & 0x1ff) << 0 |
(color[1] & 0x1ff) << 9 |
(color[2] & 0x1ff) << 18 |
(color[3] & 0x1f) << 27;
break;
- case XGL_FMT_BC1_RGB_UNORM:
- case XGL_FMT_BC1_RGB_SRGB:
- case XGL_FMT_BC4_UNORM:
- case XGL_FMT_BC4_SNORM:
+ case VK_FMT_BC1_RGB_UNORM:
+ case VK_FMT_BC1_RGB_SRGB:
+ case VK_FMT_BC4_UNORM:
+ case VK_FMT_BC4_SNORM:
memcpy(value, color, 8);
break;
- case XGL_FMT_BC2_UNORM:
- case XGL_FMT_BC2_SRGB:
- case XGL_FMT_BC3_UNORM:
- case XGL_FMT_BC3_SRGB:
- case XGL_FMT_BC5_UNORM:
- case XGL_FMT_BC5_SNORM:
- case XGL_FMT_BC6H_UFLOAT:
- case XGL_FMT_BC6H_SFLOAT:
- case XGL_FMT_BC7_UNORM:
- case XGL_FMT_BC7_SRGB:
+ case VK_FMT_BC2_UNORM:
+ case VK_FMT_BC2_SRGB:
+ case VK_FMT_BC3_UNORM:
+ case VK_FMT_BC3_SRGB:
+ case VK_FMT_BC5_UNORM:
+ case VK_FMT_BC5_SNORM:
+ case VK_FMT_BC6H_UFLOAT:
+ case VK_FMT_BC6H_SFLOAT:
+ case VK_FMT_BC7_UNORM:
+ case VK_FMT_BC7_SRGB:
memcpy(value, color, 16);
break;
- case XGL_FMT_R8G8B8_UNORM:
- case XGL_FMT_R8G8B8_SNORM:
- case XGL_FMT_R8G8B8_USCALED:
- case XGL_FMT_R8G8B8_SSCALED:
- case XGL_FMT_R8G8B8_UINT:
- case XGL_FMT_R8G8B8_SINT:
- case XGL_FMT_R8G8B8_SRGB:
+ case VK_FMT_R8G8B8_UNORM:
+ case VK_FMT_R8G8B8_SNORM:
+ case VK_FMT_R8G8B8_USCALED:
+ case VK_FMT_R8G8B8_SSCALED:
+ case VK_FMT_R8G8B8_UINT:
+ case VK_FMT_R8G8B8_SINT:
+ case VK_FMT_R8G8B8_SRGB:
((uint8_t *) value)[0] = (uint8_t) color[0];
((uint8_t *) value)[1] = (uint8_t) color[1];
((uint8_t *) value)[2] = (uint8_t) color[2];
break;
- case XGL_FMT_R16G16B16_UNORM:
- case XGL_FMT_R16G16B16_SNORM:
- case XGL_FMT_R16G16B16_USCALED:
- case XGL_FMT_R16G16B16_SSCALED:
- case XGL_FMT_R16G16B16_UINT:
- case XGL_FMT_R16G16B16_SINT:
- case XGL_FMT_R16G16B16_SFLOAT:
+ case VK_FMT_R16G16B16_UNORM:
+ case VK_FMT_R16G16B16_SNORM:
+ case VK_FMT_R16G16B16_USCALED:
+ case VK_FMT_R16G16B16_SSCALED:
+ case VK_FMT_R16G16B16_UINT:
+ case VK_FMT_R16G16B16_SINT:
+ case VK_FMT_R16G16B16_SFLOAT:
((uint16_t *) value)[0] = (uint16_t) color[0];
((uint16_t *) value)[1] = (uint16_t) color[1];
((uint16_t *) value)[2] = (uint16_t) color[2];
break;
- case XGL_FMT_B10G10R10A2_UNORM:
- case XGL_FMT_B10G10R10A2_SNORM:
- case XGL_FMT_B10G10R10A2_USCALED:
- case XGL_FMT_B10G10R10A2_SSCALED:
- case XGL_FMT_B10G10R10A2_UINT:
- case XGL_FMT_B10G10R10A2_SINT:
+ case VK_FMT_B10G10R10A2_UNORM:
+ case VK_FMT_B10G10R10A2_SNORM:
+ case VK_FMT_B10G10R10A2_USCALED:
+ case VK_FMT_B10G10R10A2_SSCALED:
+ case VK_FMT_B10G10R10A2_UINT:
+ case VK_FMT_B10G10R10A2_SINT:
((uint32_t *) value)[0] = (color[2] & 0x3ff) << 0 |
(color[1] & 0x3ff) << 10 |
(color[0] & 0x3ff) << 20 |
(color[3] & 0x3) << 30;
break;
- case XGL_FMT_R64_SFLOAT:
+ case VK_FMT_R64_SFLOAT:
/* higher 32 bits always 0 */
((uint64_t *) value)[0] = color[0];
break;
- case XGL_FMT_R64G64_SFLOAT:
+ case VK_FMT_R64G64_SFLOAT:
((uint64_t *) value)[0] = color[0];
((uint64_t *) value)[1] = color[1];
break;
- case XGL_FMT_R64G64B64_SFLOAT:
+ case VK_FMT_R64G64B64_SFLOAT:
((uint64_t *) value)[0] = color[0];
((uint64_t *) value)[1] = color[1];
((uint64_t *) value)[2] = color[2];
break;
- case XGL_FMT_R64G64B64A64_SFLOAT:
+ case VK_FMT_R64G64B64A64_SFLOAT:
((uint64_t *) value)[0] = color[0];
((uint64_t *) value)[1] = color[1];
((uint64_t *) value)[2] = color[2];
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <stdbool.h>
#include "icd.h"
-static inline bool icd_format_is_undef(XGL_FORMAT format)
+static inline bool icd_format_is_undef(VK_FORMAT format)
{
- return (format == XGL_FMT_UNDEFINED);
+ return (format == VK_FMT_UNDEFINED);
}
-bool icd_format_is_ds(XGL_FORMAT format);
+bool icd_format_is_ds(VK_FORMAT format);
-static inline bool icd_format_is_color(XGL_FORMAT format)
+static inline bool icd_format_is_color(VK_FORMAT format)
{
return !(icd_format_is_undef(format) || icd_format_is_ds(format));
}
-bool icd_format_is_norm(XGL_FORMAT format);
+bool icd_format_is_norm(VK_FORMAT format);
-bool icd_format_is_int(XGL_FORMAT format);
+bool icd_format_is_int(VK_FORMAT format);
-bool icd_format_is_float(XGL_FORMAT format);
+bool icd_format_is_float(VK_FORMAT format);
-bool icd_format_is_srgb(XGL_FORMAT format);
+bool icd_format_is_srgb(VK_FORMAT format);
-bool icd_format_is_compressed(XGL_FORMAT format);
+bool icd_format_is_compressed(VK_FORMAT format);
-static inline int icd_format_get_block_width(XGL_FORMAT format)
+static inline int icd_format_get_block_width(VK_FORMAT format)
{
/* all compressed formats use 4x4 blocks */
return (icd_format_is_compressed(format)) ? 4 : 1;
}
-static inline bool icd_blend_mode_is_dual_src(XGL_BLEND mode)
+static inline bool icd_blend_mode_is_dual_src(VK_BLEND mode)
{
- return (mode == XGL_BLEND_SRC1_COLOR) ||
- (mode == XGL_BLEND_SRC1_ALPHA) ||
- (mode == XGL_BLEND_ONE_MINUS_SRC1_COLOR) ||
- (mode == XGL_BLEND_ONE_MINUS_SRC1_ALPHA);
+ return (mode == VK_BLEND_SRC1_COLOR) ||
+ (mode == VK_BLEND_SRC1_ALPHA) ||
+ (mode == VK_BLEND_ONE_MINUS_SRC1_COLOR) ||
+ (mode == VK_BLEND_ONE_MINUS_SRC1_ALPHA);
}
-static inline bool icd_pipeline_cb_att_needs_dual_source_blending(const XGL_PIPELINE_CB_ATTACHMENT_STATE *att)
+static inline bool icd_pipeline_cb_att_needs_dual_source_blending(const VK_PIPELINE_CB_ATTACHMENT_STATE *att)
{
if (icd_blend_mode_is_dual_src(att->srcBlendColor) ||
icd_blend_mode_is_dual_src(att->srcBlendAlpha) ||
return false;
}
-size_t icd_format_get_size(XGL_FORMAT format);
+size_t icd_format_get_size(VK_FORMAT format);
-XGL_IMAGE_FORMAT_CLASS icd_format_get_class(XGL_FORMAT format);
+VK_IMAGE_FORMAT_CLASS icd_format_get_class(VK_FORMAT format);
-unsigned int icd_format_get_channel_count(XGL_FORMAT format);
+unsigned int icd_format_get_channel_count(VK_FORMAT format);
-void icd_format_get_raw_value(XGL_FORMAT format,
+void icd_format_get_raw_value(VK_FORMAT format,
const uint32_t color[4],
void *value);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014-2015 LunarG, Inc.
*
#include <string.h>
#include "icd-instance.h"
-static void * XGLAPI default_alloc(void *user_data, size_t size,
+static void * VKAPI default_alloc(void *user_data, size_t size,
size_t alignment,
- XGL_SYSTEM_ALLOC_TYPE allocType)
+ VK_SYSTEM_ALLOC_TYPE allocType)
{
if (alignment <= 1) {
return malloc(size);
}
}
-static void XGLAPI default_free(void *user_data, void *ptr)
+static void VKAPI default_free(void *user_data, void *ptr)
{
free(ptr);
}
-struct icd_instance *icd_instance_create(const XGL_APPLICATION_INFO *app_info,
- const XGL_ALLOC_CALLBACKS *alloc_cb)
+struct icd_instance *icd_instance_create(const VK_APPLICATION_INFO *app_info,
+ const VK_ALLOC_CALLBACKS *alloc_cb)
{
- static const XGL_ALLOC_CALLBACKS default_alloc_cb = {
+ static const VK_ALLOC_CALLBACKS default_alloc_cb = {
.pfnAlloc = default_alloc,
.pfnFree = default_free,
};
alloc_cb = &default_alloc_cb;
instance = alloc_cb->pfnAlloc(alloc_cb->pUserData, sizeof(*instance), 0,
- XGL_SYSTEM_ALLOC_API_OBJECT);
+ VK_SYSTEM_ALLOC_API_OBJECT);
if (!instance)
return NULL;
name = (app_info->pAppName) ? app_info->pAppName : "unnamed";
len = strlen(name);
instance->name = alloc_cb->pfnAlloc(alloc_cb->pUserData, len + 1, 0,
- XGL_SYSTEM_ALLOC_INTERNAL);
+ VK_SYSTEM_ALLOC_INTERNAL);
if (!instance->name) {
alloc_cb->pfnFree(alloc_cb->pUserData, instance);
return NULL;
icd_instance_free(instance, instance);
}
-XGL_RESULT icd_instance_set_bool(struct icd_instance *instance,
- XGL_DBG_GLOBAL_OPTION option, bool yes)
+VK_RESULT icd_instance_set_bool(struct icd_instance *instance,
+ VK_DBG_GLOBAL_OPTION option, bool yes)
{
- XGL_RESULT res = XGL_SUCCESS;
+ VK_RESULT res = VK_SUCCESS;
switch (option) {
- case XGL_DBG_OPTION_DEBUG_ECHO_ENABLE:
+ case VK_DBG_OPTION_DEBUG_ECHO_ENABLE:
instance->debug_echo_enable = yes;
break;
- case XGL_DBG_OPTION_BREAK_ON_ERROR:
+ case VK_DBG_OPTION_BREAK_ON_ERROR:
instance->break_on_error = yes;
break;
- case XGL_DBG_OPTION_BREAK_ON_WARNING:
+ case VK_DBG_OPTION_BREAK_ON_WARNING:
instance->break_on_warning = yes;
break;
default:
- res = XGL_ERROR_INVALID_VALUE;
+ res = VK_ERROR_INVALID_VALUE;
break;
}
return res;
}
-XGL_RESULT icd_instance_add_logger(struct icd_instance *instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION func,
+VK_RESULT icd_instance_add_logger(struct icd_instance *instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION func,
void *user_data)
{
struct icd_instance_logger *logger;
if (!logger) {
logger = icd_instance_alloc(instance, sizeof(*logger), 0,
- XGL_SYSTEM_ALLOC_DEBUG);
+ VK_SYSTEM_ALLOC_DEBUG);
if (!logger)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
logger->func = func;
logger->next = instance->loggers;
logger->user_data = user_data;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-XGL_RESULT icd_instance_remove_logger(struct icd_instance *instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION func)
+VK_RESULT icd_instance_remove_logger(struct icd_instance *instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION func)
{
struct icd_instance_logger *logger, *prev;
}
if (!logger)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
if (prev)
prev->next = logger->next;
icd_instance_free(instance, logger);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void icd_instance_log(const struct icd_instance *instance,
- XGL_DBG_MSG_TYPE msg_type,
- XGL_VALIDATION_LEVEL validation_level,
- XGL_BASE_OBJECT src_object,
+ VK_DBG_MSG_TYPE msg_type,
+ VK_VALIDATION_LEVEL validation_level,
+ VK_BASE_OBJECT src_object,
size_t location, int32_t msg_code,
const char *msg)
{
}
switch (msg_type) {
- case XGL_DBG_MSG_ERROR:
+ case VK_DBG_MSG_ERROR:
if (instance->break_on_error)
abort();
/* fall through */
- case XGL_DBG_MSG_WARNING:
+ case VK_DBG_MSG_WARNING:
if (instance->break_on_warning)
abort();
break;
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014-2015 LunarG, Inc.
*
#endif
struct icd_instance_logger {
- XGL_DBG_MSG_CALLBACK_FUNCTION func;
+ VK_DBG_MSG_CALLBACK_FUNCTION func;
void *user_data;
struct icd_instance_logger *next;
bool break_on_error;
bool break_on_warning;
- XGL_ALLOC_CALLBACKS alloc_cb;
+ VK_ALLOC_CALLBACKS alloc_cb;
struct icd_instance_logger *loggers;
};
-struct icd_instance *icd_instance_create(const XGL_APPLICATION_INFO *app_info,
- const XGL_ALLOC_CALLBACKS *alloc_cb);
+struct icd_instance *icd_instance_create(const VK_APPLICATION_INFO *app_info,
+ const VK_ALLOC_CALLBACKS *alloc_cb);
void icd_instance_destroy(struct icd_instance *instance);
-XGL_RESULT icd_instance_set_bool(struct icd_instance *instance,
- XGL_DBG_GLOBAL_OPTION option, bool yes);
+VK_RESULT icd_instance_set_bool(struct icd_instance *instance,
+ VK_DBG_GLOBAL_OPTION option, bool yes);
static inline void *icd_instance_alloc(const struct icd_instance *instance,
size_t size, size_t alignment,
- XGL_SYSTEM_ALLOC_TYPE type)
+ VK_SYSTEM_ALLOC_TYPE type)
{
return instance->alloc_cb.pfnAlloc(instance->alloc_cb.pUserData,
size, alignment, type);
instance->alloc_cb.pfnFree(instance->alloc_cb.pUserData, ptr);
}
-XGL_RESULT icd_instance_add_logger(struct icd_instance *instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION func,
+VK_RESULT icd_instance_add_logger(struct icd_instance *instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION func,
void *user_data);
-XGL_RESULT icd_instance_remove_logger(struct icd_instance *instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION func);
+VK_RESULT icd_instance_remove_logger(struct icd_instance *instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION func);
void icd_instance_log(const struct icd_instance *instance,
- XGL_DBG_MSG_TYPE msg_type,
- XGL_VALIDATION_LEVEL validation_level,
- XGL_BASE_OBJECT src_object,
+ VK_DBG_MSG_TYPE msg_type,
+ VK_VALIDATION_LEVEL validation_level,
+ VK_BASE_OBJECT src_object,
size_t location, int32_t msg_code,
const char *msg);
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#ifndef ICD_H
#define ICD_H
-#include <xgl.h>
-#include <xglPlatform.h>
-#include <xglDbg.h>
+#include <vulkan.h>
+#include <vkPlatform.h>
+#include <vkDbg.h>
#if defined(PLATFORM_LINUX)
-#include <xglWsiX11Ext.h>
+#include <vkWsiX11Ext.h>
#else
-#include <xglWsiWinExt.h>
+#include <vkWsiWinExt.h>
#endif
-# Create the nulldrv XGL DRI library
+# Create the nulldrv VK DRI library
-set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES")
+set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES")
add_custom_command(OUTPUT nulldrv_gpa.c
COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py icd-get-proc-addr > nulldrv_gpa.c
-# Null XGL Driver
+# Null VK Driver
-This directory provides a null XGL driver
+This directory provides a null VK driver
;;;; Begin Copyright Notice ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
-; XGL
+; VK
;
; Copyright (C) 2015 LunarG, Inc.
;
; The following is required on Windows, for exporting symbols from the DLL
-LIBRARY XGL_nulldrv
+LIBRARY VK_nulldrv
EXPORTS
- xglGetProcAddr
- xglCreateInstance
- xglEnumerateGpus
- xglDestroyInstance
+ vkGetProcAddr
+ vkCreateInstance
+ vkEnumerateGpus
+ vkDestroyInstance
xcbCreateWindow
xcbDestroyWindow
xcbGetMessage
/*
- * XGL null driver
+ * Vulkan null driver
*
* Copyright (C) 2015 LunarG, Inc.
*
// The null driver supports all WSI extensions ... for now ...
static const char * const nulldrv_gpu_exts[NULLDRV_EXT_COUNT] = {
- [NULLDRV_EXT_WSI_X11] = "XGL_WSI_X11",
- [NULLDRV_EXT_WSI_WINDOWS] = "XGL_WSI_WINDOWS"
+ [NULLDRV_EXT_WSI_X11] = "VK_WSI_X11",
+ [NULLDRV_EXT_WSI_WINDOWS] = "VK_WSI_WINDOWS"
};
-static struct nulldrv_base *nulldrv_base(XGL_BASE_OBJECT base)
+static struct nulldrv_base *nulldrv_base(VK_BASE_OBJECT base)
{
return (struct nulldrv_base *) base;
}
-static XGL_RESULT nulldrv_base_get_info(struct nulldrv_base *base, int type,
+static VK_RESULT nulldrv_base_get_info(struct nulldrv_base *base, int type,
size_t *size, void *data)
{
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
size_t s;
uint32_t *count;
switch (type) {
- case XGL_INFO_TYPE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_MEMORY_REQUIREMENTS:
{
- XGL_MEMORY_REQUIREMENTS *mem_req = data;
- s = sizeof(XGL_MEMORY_REQUIREMENTS);
+ VK_MEMORY_REQUIREMENTS *mem_req = data;
+ s = sizeof(VK_MEMORY_REQUIREMENTS);
*size = s;
if (data == NULL)
return ret;
memset(data, 0, s);
- mem_req->memType = XGL_MEMORY_TYPE_OTHER;
+ mem_req->memType = VK_MEMORY_TYPE_OTHER;
break;
}
- case XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT:
+ case VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT:
*size = sizeof(uint32_t);
if (data == NULL)
return ret;
count = (uint32_t *) data;
*count = 1;
break;
- case XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS:
- s = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
+ case VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS:
+ s = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
*size = s;
if (data == NULL)
return ret;
memset(data, 0, s);
break;
- case XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
- s = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
+ case VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
+ s = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
*size = s;
if (data == NULL)
return ret;
memset(data, 0, s);
break;
default:
- ret = XGL_ERROR_INVALID_VALUE;
+ ret = VK_ERROR_INVALID_VALUE;
break;
}
static struct nulldrv_base *nulldrv_base_create(struct nulldrv_dev *dev,
size_t obj_size,
- XGL_DBG_OBJECT_TYPE type)
+ VK_DBG_OBJECT_TYPE type)
{
struct nulldrv_base *base;
return base;
}
-static XGL_RESULT nulldrv_gpu_add(int devid, const char *primary_node,
+static VK_RESULT nulldrv_gpu_add(int devid, const char *primary_node,
const char *render_node, struct nulldrv_gpu **gpu_ret)
{
struct nulldrv_gpu *gpu;
gpu = malloc(sizeof(*gpu));
if (!gpu)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
memset(gpu, 0, sizeof(*gpu));
// Initialize pointer to loader's dispatch table with ICD_LOADER_MAGIC
*gpu_ret = gpu;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_queue_create(struct nulldrv_dev *dev,
+static VK_RESULT nulldrv_queue_create(struct nulldrv_dev *dev,
uint32_t node_index,
struct nulldrv_queue **queue_ret)
{
struct nulldrv_queue *queue;
queue = (struct nulldrv_queue *) nulldrv_base_create(dev, sizeof(*queue),
- XGL_DBG_OBJECT_QUEUE);
+ VK_DBG_OBJECT_QUEUE);
if (!queue)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
queue->dev = dev;
*queue_ret = queue;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT dev_create_queues(struct nulldrv_dev *dev,
- const XGL_DEVICE_QUEUE_CREATE_INFO *queues,
+static VK_RESULT dev_create_queues(struct nulldrv_dev *dev,
+ const VK_DEVICE_QUEUE_CREATE_INFO *queues,
uint32_t count)
{
uint32_t i;
if (!count)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
for (i = 0; i < count; i++) {
- const XGL_DEVICE_QUEUE_CREATE_INFO *q = &queues[i];
- XGL_RESULT ret = XGL_SUCCESS;
+ const VK_DEVICE_QUEUE_CREATE_INFO *q = &queues[i];
+ VK_RESULT ret = VK_SUCCESS;
if (q->queueCount == 1 && !dev->queues[q->queueNodeIndex]) {
ret = nulldrv_queue_create(dev, q->queueNodeIndex,
&dev->queues[q->queueNodeIndex]);
}
else {
- ret = XGL_ERROR_INVALID_POINTER;
+ ret = VK_ERROR_INVALID_POINTER;
}
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
return ret;
}
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
static enum nulldrv_ext_type nulldrv_gpu_lookup_extension(const struct nulldrv_gpu *gpu,
return type;
}
-static XGL_RESULT nulldrv_desc_ooxx_create(struct nulldrv_dev *dev,
+static VK_RESULT nulldrv_desc_ooxx_create(struct nulldrv_dev *dev,
struct nulldrv_desc_ooxx **ooxx_ret)
{
struct nulldrv_desc_ooxx *ooxx;
ooxx = malloc(sizeof(*ooxx));
if (!ooxx)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
memset(ooxx, 0, sizeof(*ooxx));
*ooxx_ret = ooxx;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_dev_create(struct nulldrv_gpu *gpu,
- const XGL_DEVICE_CREATE_INFO *info,
+static VK_RESULT nulldrv_dev_create(struct nulldrv_gpu *gpu,
+ const VK_DEVICE_CREATE_INFO *info,
struct nulldrv_dev **dev_ret)
{
struct nulldrv_dev *dev;
uint32_t i;
- XGL_RESULT ret;
+ VK_RESULT ret;
dev = (struct nulldrv_dev *) nulldrv_base_create(NULL, sizeof(*dev),
- XGL_DBG_OBJECT_DEVICE);
+ VK_DBG_OBJECT_DEVICE);
if (!dev)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
for (i = 0; i < info->extensionCount; i++) {
const enum nulldrv_ext_type ext = nulldrv_gpu_lookup_extension(gpu,
info->ppEnabledExtensionNames[i]);
if (ext == NULLDRV_EXT_INVALID)
- return XGL_ERROR_INVALID_EXTENSION;
+ return VK_ERROR_INVALID_EXTENSION;
dev->exts[ext] = true;
}
ret = nulldrv_desc_ooxx_create(dev, &dev->desc_ooxx);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
return ret;
}
ret = dev_create_queues(dev, info->pRequestedQueues,
info->queueRecordCount);
- if (ret != XGL_SUCCESS) {
+ if (ret != VK_SUCCESS) {
return ret;
}
*dev_ret = dev;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static struct nulldrv_gpu *nulldrv_gpu(XGL_PHYSICAL_GPU gpu)
+static struct nulldrv_gpu *nulldrv_gpu(VK_PHYSICAL_GPU gpu)
{
return (struct nulldrv_gpu *) gpu;
}
-static XGL_RESULT nulldrv_rt_view_create(struct nulldrv_dev *dev,
- const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO *info,
+static VK_RESULT nulldrv_rt_view_create(struct nulldrv_dev *dev,
+ const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO *info,
struct nulldrv_rt_view **view_ret)
{
struct nulldrv_rt_view *view;
view = (struct nulldrv_rt_view *) nulldrv_base_create(dev, sizeof(*view),
- XGL_DBG_OBJECT_COLOR_TARGET_VIEW);
+ VK_DBG_OBJECT_COLOR_TARGET_VIEW);
if (!view)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*view_ret = view;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_fence_create(struct nulldrv_dev *dev,
- const XGL_FENCE_CREATE_INFO *info,
+static VK_RESULT nulldrv_fence_create(struct nulldrv_dev *dev,
+ const VK_FENCE_CREATE_INFO *info,
struct nulldrv_fence **fence_ret)
{
struct nulldrv_fence *fence;
fence = (struct nulldrv_fence *) nulldrv_base_create(dev, sizeof(*fence),
- XGL_DBG_OBJECT_FENCE);
+ VK_DBG_OBJECT_FENCE);
if (!fence)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*fence_ret = fence;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static struct nulldrv_dev *nulldrv_dev(XGL_DEVICE dev)
+static struct nulldrv_dev *nulldrv_dev(VK_DEVICE dev)
{
return (struct nulldrv_dev *) dev;
}
}
-static XGL_RESULT img_get_info(struct nulldrv_base *base, int type,
+static VK_RESULT img_get_info(struct nulldrv_base *base, int type,
size_t *size, void *data)
{
struct nulldrv_img *img = nulldrv_img_from_base(base);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (type) {
- case XGL_INFO_TYPE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_MEMORY_REQUIREMENTS:
{
- XGL_MEMORY_REQUIREMENTS *mem_req = data;
+ VK_MEMORY_REQUIREMENTS *mem_req = data;
- *size = sizeof(XGL_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
mem_req->size = img->total_size;
mem_req->alignment = 4096;
- mem_req->memType = XGL_MEMORY_TYPE_IMAGE;
+ mem_req->memType = VK_MEMORY_TYPE_IMAGE;
}
break;
- case XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS:
{
- XGL_IMAGE_MEMORY_REQUIREMENTS *img_req = data;
+ VK_IMAGE_MEMORY_REQUIREMENTS *img_req = data;
- *size = sizeof(XGL_IMAGE_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_IMAGE_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
img_req->usage = img->usage;
img_req->samples = img->samples;
}
break;
- case XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
{
- XGL_BUFFER_MEMORY_REQUIREMENTS *buf_req = data;
+ VK_BUFFER_MEMORY_REQUIREMENTS *buf_req = data;
- *size = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
buf_req->usage = img->usage;
return ret;
}
-static XGL_RESULT nulldrv_img_create(struct nulldrv_dev *dev,
- const XGL_IMAGE_CREATE_INFO *info,
+static VK_RESULT nulldrv_img_create(struct nulldrv_dev *dev,
+ const VK_IMAGE_CREATE_INFO *info,
bool scanout,
struct nulldrv_img **img_ret)
{
struct nulldrv_img *img;
img = (struct nulldrv_img *) nulldrv_base_create(dev, sizeof(*img),
- XGL_DBG_OBJECT_IMAGE);
+ VK_DBG_OBJECT_IMAGE);
if (!img)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
img->type = info->imageType;
img->depth = info->extent.depth;
img->mip_levels = info->mipLevels;
img->array_size = info->arraySize;
img->usage = info->usage;
- if (info->tiling == XGL_LINEAR_TILING)
- img->format_class = XGL_IMAGE_FORMAT_CLASS_LINEAR;
+ if (info->tiling == VK_LINEAR_TILING)
+ img->format_class = VK_IMAGE_FORMAT_CLASS_LINEAR;
else
img->format_class = icd_format_get_class(info->format);
img->samples = info->samples;
*img_ret = img;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static struct nulldrv_img *nulldrv_img(XGL_IMAGE image)
+static struct nulldrv_img *nulldrv_img(VK_IMAGE image)
{
return (struct nulldrv_img *) image;
}
-static XGL_RESULT nulldrv_mem_alloc(struct nulldrv_dev *dev,
- const XGL_MEMORY_ALLOC_INFO *info,
+static VK_RESULT nulldrv_mem_alloc(struct nulldrv_dev *dev,
+ const VK_MEMORY_ALLOC_INFO *info,
struct nulldrv_mem **mem_ret)
{
struct nulldrv_mem *mem;
mem = (struct nulldrv_mem *) nulldrv_base_create(dev, sizeof(*mem),
- XGL_DBG_OBJECT_GPU_MEMORY);
+ VK_DBG_OBJECT_GPU_MEMORY);
if (!mem)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
mem->bo = malloc(info->allocationSize);
if (!mem->bo) {
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
}
mem->size = info->allocationSize;
*mem_ret = mem;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_ds_view_create(struct nulldrv_dev *dev,
- const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO *info,
+static VK_RESULT nulldrv_ds_view_create(struct nulldrv_dev *dev,
+ const VK_DEPTH_STENCIL_VIEW_CREATE_INFO *info,
struct nulldrv_ds_view **view_ret)
{
struct nulldrv_img *img = nulldrv_img(info->image);
struct nulldrv_ds_view *view;
view = (struct nulldrv_ds_view *) nulldrv_base_create(dev, sizeof(*view),
- XGL_DBG_OBJECT_DEPTH_STENCIL_VIEW);
+ VK_DBG_OBJECT_DEPTH_STENCIL_VIEW);
if (!view)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
view->img = img;
*view_ret = view;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_sampler_create(struct nulldrv_dev *dev,
- const XGL_SAMPLER_CREATE_INFO *info,
+static VK_RESULT nulldrv_sampler_create(struct nulldrv_dev *dev,
+ const VK_SAMPLER_CREATE_INFO *info,
struct nulldrv_sampler **sampler_ret)
{
struct nulldrv_sampler *sampler;
sampler = (struct nulldrv_sampler *) nulldrv_base_create(dev,
- sizeof(*sampler), XGL_DBG_OBJECT_SAMPLER);
+ sizeof(*sampler), VK_DBG_OBJECT_SAMPLER);
if (!sampler)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*sampler_ret = sampler;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_img_view_create(struct nulldrv_dev *dev,
- const XGL_IMAGE_VIEW_CREATE_INFO *info,
+static VK_RESULT nulldrv_img_view_create(struct nulldrv_dev *dev,
+ const VK_IMAGE_VIEW_CREATE_INFO *info,
struct nulldrv_img_view **view_ret)
{
struct nulldrv_img *img = nulldrv_img(info->image);
struct nulldrv_img_view *view;
view = (struct nulldrv_img_view *) nulldrv_base_create(dev, sizeof(*view),
- XGL_DBG_OBJECT_IMAGE_VIEW);
+ VK_DBG_OBJECT_IMAGE_VIEW);
if (!view)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
view->img = img;
view->min_lod = info->minLod;
*view_ret = view;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static void *nulldrv_mem_map(struct nulldrv_mem *mem, XGL_FLAGS flags)
+static void *nulldrv_mem_map(struct nulldrv_mem *mem, VK_FLAGS flags)
{
return mem->bo;
}
-static struct nulldrv_mem *nulldrv_mem(XGL_GPU_MEMORY mem)
+static struct nulldrv_mem *nulldrv_mem(VK_GPU_MEMORY mem)
{
return (struct nulldrv_mem *) mem;
}
return (struct nulldrv_buf *) base;
}
-static XGL_RESULT buf_get_info(struct nulldrv_base *base, int type,
+static VK_RESULT buf_get_info(struct nulldrv_base *base, int type,
size_t *size, void *data)
{
struct nulldrv_buf *buf = nulldrv_buf_from_base(base);
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (type) {
- case XGL_INFO_TYPE_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_MEMORY_REQUIREMENTS:
{
- XGL_MEMORY_REQUIREMENTS *mem_req = data;
+ VK_MEMORY_REQUIREMENTS *mem_req = data;
- *size = sizeof(XGL_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
mem_req->size = buf->size;
mem_req->alignment = 4096;
- mem_req->memType = XGL_MEMORY_TYPE_BUFFER;
+ mem_req->memType = VK_MEMORY_TYPE_BUFFER;
}
break;
- case XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
+ case VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS:
{
- XGL_BUFFER_MEMORY_REQUIREMENTS *buf_req = data;
+ VK_BUFFER_MEMORY_REQUIREMENTS *buf_req = data;
- *size = sizeof(XGL_BUFFER_MEMORY_REQUIREMENTS);
+ *size = sizeof(VK_BUFFER_MEMORY_REQUIREMENTS);
if (data == NULL)
return ret;
buf_req->usage = buf->usage;
return ret;
}
-static XGL_RESULT nulldrv_buf_create(struct nulldrv_dev *dev,
- const XGL_BUFFER_CREATE_INFO *info,
+static VK_RESULT nulldrv_buf_create(struct nulldrv_dev *dev,
+ const VK_BUFFER_CREATE_INFO *info,
struct nulldrv_buf **buf_ret)
{
struct nulldrv_buf *buf;
buf = (struct nulldrv_buf *) nulldrv_base_create(dev, sizeof(*buf),
- XGL_DBG_OBJECT_BUFFER);
+ VK_DBG_OBJECT_BUFFER);
if (!buf)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
buf->size = info->size;
buf->usage = info->usage;
*buf_ret = buf;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_desc_layout_create(struct nulldrv_dev *dev,
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO *info,
+static VK_RESULT nulldrv_desc_layout_create(struct nulldrv_dev *dev,
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO *info,
struct nulldrv_desc_layout **layout_ret)
{
struct nulldrv_desc_layout *layout;
layout = (struct nulldrv_desc_layout *)
nulldrv_base_create(dev, sizeof(*layout),
- XGL_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT);
+ VK_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT);
if (!layout)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*layout_ret = layout;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_desc_layout_chain_create(struct nulldrv_dev *dev,
+static VK_RESULT nulldrv_desc_layout_chain_create(struct nulldrv_dev *dev,
uint32_t setLayoutArrayCount,
- const XGL_DESCRIPTOR_SET_LAYOUT *pSetLayoutArray,
+ const VK_DESCRIPTOR_SET_LAYOUT *pSetLayoutArray,
struct nulldrv_desc_layout_chain **chain_ret)
{
struct nulldrv_desc_layout_chain *chain;
chain = (struct nulldrv_desc_layout_chain *)
nulldrv_base_create(dev, sizeof(*chain),
- XGL_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT_CHAIN);
+ VK_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT_CHAIN);
if (!chain)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*chain_ret = chain;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static struct nulldrv_desc_layout *nulldrv_desc_layout(XGL_DESCRIPTOR_SET_LAYOUT layout)
+static struct nulldrv_desc_layout *nulldrv_desc_layout(VK_DESCRIPTOR_SET_LAYOUT layout)
{
return (struct nulldrv_desc_layout *) layout;
}
-static XGL_RESULT shader_create(struct nulldrv_dev *dev,
- const XGL_SHADER_CREATE_INFO *info,
+static VK_RESULT shader_create(struct nulldrv_dev *dev,
+ const VK_SHADER_CREATE_INFO *info,
struct nulldrv_shader **sh_ret)
{
struct nulldrv_shader *sh;
sh = (struct nulldrv_shader *) nulldrv_base_create(dev, sizeof(*sh),
- XGL_DBG_OBJECT_SHADER);
+ VK_DBG_OBJECT_SHADER);
if (!sh)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*sh_ret = sh;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT graphics_pipeline_create(struct nulldrv_dev *dev,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO *info_,
+static VK_RESULT graphics_pipeline_create(struct nulldrv_dev *dev,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO *info_,
struct nulldrv_pipeline **pipeline_ret)
{
struct nulldrv_pipeline *pipeline;
pipeline = (struct nulldrv_pipeline *)
nulldrv_base_create(dev, sizeof(*pipeline),
- XGL_DBG_OBJECT_GRAPHICS_PIPELINE);
+ VK_DBG_OBJECT_GRAPHICS_PIPELINE);
if (!pipeline)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*pipeline_ret = pipeline;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_viewport_state_create(struct nulldrv_dev *dev,
- const XGL_DYNAMIC_VP_STATE_CREATE_INFO *info,
+static VK_RESULT nulldrv_viewport_state_create(struct nulldrv_dev *dev,
+ const VK_DYNAMIC_VP_STATE_CREATE_INFO *info,
struct nulldrv_dynamic_vp **state_ret)
{
struct nulldrv_dynamic_vp *state;
state = (struct nulldrv_dynamic_vp *) nulldrv_base_create(dev,
- sizeof(*state), XGL_DBG_OBJECT_VIEWPORT_STATE);
+ sizeof(*state), VK_DBG_OBJECT_VIEWPORT_STATE);
if (!state)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*state_ret = state;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_raster_state_create(struct nulldrv_dev *dev,
- const XGL_DYNAMIC_RS_STATE_CREATE_INFO *info,
+static VK_RESULT nulldrv_raster_state_create(struct nulldrv_dev *dev,
+ const VK_DYNAMIC_RS_STATE_CREATE_INFO *info,
struct nulldrv_dynamic_rs **state_ret)
{
struct nulldrv_dynamic_rs *state;
state = (struct nulldrv_dynamic_rs *) nulldrv_base_create(dev,
- sizeof(*state), XGL_DBG_OBJECT_RASTER_STATE);
+ sizeof(*state), VK_DBG_OBJECT_RASTER_STATE);
if (!state)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*state_ret = state;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_blend_state_create(struct nulldrv_dev *dev,
- const XGL_DYNAMIC_CB_STATE_CREATE_INFO *info,
+static VK_RESULT nulldrv_blend_state_create(struct nulldrv_dev *dev,
+ const VK_DYNAMIC_CB_STATE_CREATE_INFO *info,
struct nulldrv_dynamic_cb **state_ret)
{
struct nulldrv_dynamic_cb *state;
state = (struct nulldrv_dynamic_cb *) nulldrv_base_create(dev,
- sizeof(*state), XGL_DBG_OBJECT_COLOR_BLEND_STATE);
+ sizeof(*state), VK_DBG_OBJECT_COLOR_BLEND_STATE);
if (!state)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*state_ret = state;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_ds_state_create(struct nulldrv_dev *dev,
- const XGL_DYNAMIC_DS_STATE_CREATE_INFO *info,
+static VK_RESULT nulldrv_ds_state_create(struct nulldrv_dev *dev,
+ const VK_DYNAMIC_DS_STATE_CREATE_INFO *info,
struct nulldrv_dynamic_ds **state_ret)
{
struct nulldrv_dynamic_ds *state;
state = (struct nulldrv_dynamic_ds *) nulldrv_base_create(dev,
- sizeof(*state), XGL_DBG_OBJECT_DEPTH_STENCIL_STATE);
+ sizeof(*state), VK_DBG_OBJECT_DEPTH_STENCIL_STATE);
if (!state)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*state_ret = state;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_cmd_create(struct nulldrv_dev *dev,
- const XGL_CMD_BUFFER_CREATE_INFO *info,
+static VK_RESULT nulldrv_cmd_create(struct nulldrv_dev *dev,
+ const VK_CMD_BUFFER_CREATE_INFO *info,
struct nulldrv_cmd **cmd_ret)
{
struct nulldrv_cmd *cmd;
cmd = (struct nulldrv_cmd *) nulldrv_base_create(dev, sizeof(*cmd),
- XGL_DBG_OBJECT_CMD_BUFFER);
+ VK_DBG_OBJECT_CMD_BUFFER);
if (!cmd)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*cmd_ret = cmd;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_desc_pool_create(struct nulldrv_dev *dev,
- XGL_DESCRIPTOR_POOL_USAGE usage,
+static VK_RESULT nulldrv_desc_pool_create(struct nulldrv_dev *dev,
+ VK_DESCRIPTOR_POOL_USAGE usage,
uint32_t max_sets,
- const XGL_DESCRIPTOR_POOL_CREATE_INFO *info,
+ const VK_DESCRIPTOR_POOL_CREATE_INFO *info,
struct nulldrv_desc_pool **pool_ret)
{
struct nulldrv_desc_pool *pool;
pool = (struct nulldrv_desc_pool *)
nulldrv_base_create(dev, sizeof(*pool),
- XGL_DBG_OBJECT_DESCRIPTOR_POOL);
+ VK_DBG_OBJECT_DESCRIPTOR_POOL);
if (!pool)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
pool->dev = dev;
*pool_ret = pool;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_desc_set_create(struct nulldrv_dev *dev,
+static VK_RESULT nulldrv_desc_set_create(struct nulldrv_dev *dev,
struct nulldrv_desc_pool *pool,
- XGL_DESCRIPTOR_SET_USAGE usage,
+ VK_DESCRIPTOR_SET_USAGE usage,
const struct nulldrv_desc_layout *layout,
struct nulldrv_desc_set **set_ret)
{
    struct nulldrv_desc_set *set;
set = (struct nulldrv_desc_set *)
nulldrv_base_create(dev, sizeof(*set),
- XGL_DBG_OBJECT_DESCRIPTOR_SET);
+ VK_DBG_OBJECT_DESCRIPTOR_SET);
if (!set)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
set->ooxx = dev->desc_ooxx;
set->layout = layout;
*set_ret = set;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static struct nulldrv_desc_pool *nulldrv_desc_pool(XGL_DESCRIPTOR_POOL pool)
+static struct nulldrv_desc_pool *nulldrv_desc_pool(VK_DESCRIPTOR_POOL pool)
{
return (struct nulldrv_desc_pool *) pool;
}
-static XGL_RESULT nulldrv_fb_create(struct nulldrv_dev *dev,
- const XGL_FRAMEBUFFER_CREATE_INFO* info,
+static VK_RESULT nulldrv_fb_create(struct nulldrv_dev *dev,
+ const VK_FRAMEBUFFER_CREATE_INFO* info,
struct nulldrv_framebuffer ** fb_ret)
{
struct nulldrv_framebuffer *fb;
fb = (struct nulldrv_framebuffer *) nulldrv_base_create(dev, sizeof(*fb),
- XGL_DBG_OBJECT_FRAMEBUFFER);
+ VK_DBG_OBJECT_FRAMEBUFFER);
if (!fb)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*fb_ret = fb;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static XGL_RESULT nulldrv_render_pass_create(struct nulldrv_dev *dev,
- const XGL_RENDER_PASS_CREATE_INFO* info,
+static VK_RESULT nulldrv_render_pass_create(struct nulldrv_dev *dev,
+ const VK_RENDER_PASS_CREATE_INFO* info,
struct nulldrv_render_pass** rp_ret)
{
struct nulldrv_render_pass *rp;
rp = (struct nulldrv_render_pass *) nulldrv_base_create(dev, sizeof(*rp),
- XGL_DBG_OBJECT_RENDER_PASS);
+ VK_DBG_OBJECT_RENDER_PASS);
if (!rp)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
*rp_ret = rp;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-static struct nulldrv_buf *nulldrv_buf(XGL_BUFFER buf)
+static struct nulldrv_buf *nulldrv_buf(VK_BUFFER buf)
{
return (struct nulldrv_buf *) buf;
}
-static XGL_RESULT nulldrv_buf_view_create(struct nulldrv_dev *dev,
- const XGL_BUFFER_VIEW_CREATE_INFO *info,
+static VK_RESULT nulldrv_buf_view_create(struct nulldrv_dev *dev,
+ const VK_BUFFER_VIEW_CREATE_INFO *info,
struct nulldrv_buf_view **view_ret)
{
struct nulldrv_buf *buf = nulldrv_buf(info->buffer);
struct nulldrv_buf_view *view;
view = (struct nulldrv_buf_view *) nulldrv_base_create(dev, sizeof(*view),
- XGL_DBG_OBJECT_BUFFER_VIEW);
+ VK_DBG_OBJECT_BUFFER_VIEW);
if (!view)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
view->buf = buf;
*view_ret = view;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
// Driver entry points
//*********************************************
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateBuffer(
- XGL_DEVICE device,
- const XGL_BUFFER_CREATE_INFO* pCreateInfo,
- XGL_BUFFER* pBuffer)
+ICD_EXPORT VK_RESULT VKAPI vkCreateBuffer(
+ VK_DEVICE device,
+ const VK_BUFFER_CREATE_INFO* pCreateInfo,
+ VK_BUFFER* pBuffer)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
return nulldrv_buf_create(dev, pCreateInfo, (struct nulldrv_buf **) pBuffer);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateCommandBuffer(
- XGL_DEVICE device,
- const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo,
- XGL_CMD_BUFFER* pCmdBuffer)
+ICD_EXPORT VK_RESULT VKAPI vkCreateCommandBuffer(
+ VK_DEVICE device,
+ const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo,
+ VK_CMD_BUFFER* pCmdBuffer)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
    return nulldrv_cmd_create(dev, pCreateInfo,
(struct nulldrv_cmd **) pCmdBuffer);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBeginCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_CMD_BUFFER_BEGIN_INFO *info)
+ICD_EXPORT VK_RESULT VKAPI vkBeginCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_CMD_BUFFER_BEGIN_INFO *info)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglEndCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer)
+ICD_EXPORT VK_RESULT VKAPI vkEndCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglResetCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer)
+ICD_EXPORT VK_RESULT VKAPI vkResetCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT void XGLAPI xglCmdInitAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+ICD_EXPORT void VKAPI vkCmdInitAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
const uint32_t* pData)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdLoadAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+ICD_EXPORT void VKAPI vkCmdLoadAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
- XGL_BUFFER srcBuffer,
- XGL_GPU_SIZE srcOffset)
+ VK_BUFFER srcBuffer,
+ VK_GPU_SIZE srcOffset)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdSaveAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+ICD_EXPORT void VKAPI vkCmdSaveAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset)
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDbgMarkerBegin(
- XGL_CMD_BUFFER cmdBuffer,
+ICD_EXPORT void VKAPI vkCmdDbgMarkerBegin(
+ VK_CMD_BUFFER cmdBuffer,
const char* pMarker)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDbgMarkerEnd(
- XGL_CMD_BUFFER cmdBuffer)
+ICD_EXPORT void VKAPI vkCmdDbgMarkerEnd(
+ VK_CMD_BUFFER cmdBuffer)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdCopyBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_BUFFER destBuffer,
+ICD_EXPORT void VKAPI vkCmdCopyBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_BUFFER destBuffer,
uint32_t regionCount,
- const XGL_BUFFER_COPY* pRegions)
+ const VK_BUFFER_COPY* pRegions)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdCopyImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ICD_EXPORT void VKAPI vkCmdCopyImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_IMAGE_COPY* pRegions)
+ const VK_IMAGE_COPY* pRegions)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBlitImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ICD_EXPORT void VKAPI vkCmdBlitImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_IMAGE_BLIT* pRegions)
+ const VK_IMAGE_BLIT* pRegions)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdCopyBufferToImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ICD_EXPORT void VKAPI vkCmdCopyBufferToImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_BUFFER_IMAGE_COPY* pRegions)
+ const VK_BUFFER_IMAGE_COPY* pRegions)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdCopyImageToBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_BUFFER destBuffer,
+ICD_EXPORT void VKAPI vkCmdCopyImageToBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_BUFFER destBuffer,
uint32_t regionCount,
- const XGL_BUFFER_IMAGE_COPY* pRegions)
+ const VK_BUFFER_IMAGE_COPY* pRegions)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdCloneImageData(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout)
+ICD_EXPORT void VKAPI vkCmdCloneImageData(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdUpdateBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset,
- XGL_GPU_SIZE dataSize,
+ICD_EXPORT void VKAPI vkCmdUpdateBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset,
+ VK_GPU_SIZE dataSize,
const uint32_t* pData)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdFillBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset,
- XGL_GPU_SIZE fillSize,
+ICD_EXPORT void VKAPI vkCmdFillBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset,
+ VK_GPU_SIZE fillSize,
uint32_t data)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdClearColorImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT imageLayout,
- XGL_CLEAR_COLOR color,
+ICD_EXPORT void VKAPI vkCmdClearColorImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT imageLayout,
+ VK_CLEAR_COLOR color,
uint32_t rangeCount,
- const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+ const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdClearDepthStencil(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT imageLayout,
+ICD_EXPORT void VKAPI vkCmdClearDepthStencil(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT imageLayout,
float depth,
uint32_t stencil,
uint32_t rangeCount,
- const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+ const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdResolveImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ICD_EXPORT void VKAPI vkCmdResolveImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t rectCount,
- const XGL_IMAGE_RESOLVE* pRects)
+ const VK_IMAGE_RESOLVE* pRects)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBeginQuery(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+ICD_EXPORT void VKAPI vkCmdBeginQuery(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t slot,
- XGL_FLAGS flags)
+ VK_FLAGS flags)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdEndQuery(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+ICD_EXPORT void VKAPI vkCmdEndQuery(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t slot)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdResetQueryPool(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+ICD_EXPORT void VKAPI vkCmdResetQueryPool(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t startQuery,
uint32_t queryCount)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdSetEvent(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_EVENT event_,
- XGL_PIPE_EVENT pipeEvent)
+ICD_EXPORT void VKAPI vkCmdSetEvent(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_EVENT event_,
+ VK_PIPE_EVENT pipeEvent)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdResetEvent(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_EVENT event_,
- XGL_PIPE_EVENT pipeEvent)
+ICD_EXPORT void VKAPI vkCmdResetEvent(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_EVENT event_,
+ VK_PIPE_EVENT pipeEvent)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdWriteTimestamp(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_TIMESTAMP_TYPE timestampType,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset)
+ICD_EXPORT void VKAPI vkCmdWriteTimestamp(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_TIMESTAMP_TYPE timestampType,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBindPipeline(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
- XGL_PIPELINE pipeline)
+ICD_EXPORT void VKAPI vkCmdBindPipeline(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
+ VK_PIPELINE pipeline)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBindDynamicStateObject(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_STATE_BIND_POINT stateBindPoint,
- XGL_DYNAMIC_STATE_OBJECT state)
+ICD_EXPORT void VKAPI vkCmdBindDynamicStateObject(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_STATE_BIND_POINT stateBindPoint,
+ VK_DYNAMIC_STATE_OBJECT state)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBindDescriptorSets(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
+ICD_EXPORT void VKAPI vkCmdBindDescriptorSets(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
uint32_t layoutChainSlot,
uint32_t count,
- const XGL_DESCRIPTOR_SET* pDescriptorSets,
+ const VK_DESCRIPTOR_SET* pDescriptorSets,
const uint32_t* pUserData)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBindVertexBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+ICD_EXPORT void VKAPI vkCmdBindVertexBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t binding)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdBindIndexBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
- XGL_INDEX_TYPE indexType)
+ICD_EXPORT void VKAPI vkCmdBindIndexBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
+ VK_INDEX_TYPE indexType)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDraw(
- XGL_CMD_BUFFER cmdBuffer,
+ICD_EXPORT void VKAPI vkCmdDraw(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t firstVertex,
uint32_t vertexCount,
uint32_t firstInstance,
    uint32_t instanceCount)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDrawIndexed(
- XGL_CMD_BUFFER cmdBuffer,
+ICD_EXPORT void VKAPI vkCmdDrawIndexed(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t firstIndex,
uint32_t indexCount,
int32_t vertexOffset,
    uint32_t firstInstance,
    uint32_t instanceCount)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDrawIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+ICD_EXPORT void VKAPI vkCmdDrawIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t count,
uint32_t stride)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDrawIndexedIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+ICD_EXPORT void VKAPI vkCmdDrawIndexedIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t count,
uint32_t stride)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDispatch(
- XGL_CMD_BUFFER cmdBuffer,
+ICD_EXPORT void VKAPI vkCmdDispatch(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t x,
uint32_t y,
uint32_t z)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdDispatchIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset)
+ICD_EXPORT void VKAPI vkCmdDispatchIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdWaitEvents(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_EVENT_WAIT_INFO* pWaitInfo)
+ICD_EXPORT void VKAPI vkCmdWaitEvents(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_EVENT_WAIT_INFO* pWaitInfo)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdPipelineBarrier(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_PIPELINE_BARRIER* pBarrier)
+ICD_EXPORT void VKAPI vkCmdPipelineBarrier(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_PIPELINE_BARRIER* pBarrier)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDevice(
- XGL_PHYSICAL_GPU gpu_,
- const XGL_DEVICE_CREATE_INFO* pCreateInfo,
- XGL_DEVICE* pDevice)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDevice(
+ VK_PHYSICAL_GPU gpu_,
+ const VK_DEVICE_CREATE_INFO* pCreateInfo,
+ VK_DEVICE* pDevice)
{
NULLDRV_LOG_FUNC;
struct nulldrv_gpu *gpu = nulldrv_gpu(gpu_);
return nulldrv_dev_create(gpu, pCreateInfo, (struct nulldrv_dev**)pDevice);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDestroyDevice(
- XGL_DEVICE device)
+ICD_EXPORT VK_RESULT VKAPI vkDestroyDevice(
+ VK_DEVICE device)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetDeviceQueue(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkGetDeviceQueue(
+ VK_DEVICE device,
uint32_t queueNodeIndex,
uint32_t queueIndex,
- XGL_QUEUE* pQueue)
+ VK_QUEUE* pQueue)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
*pQueue = dev->queues[0];
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDeviceWaitIdle(
- XGL_DEVICE device)
+ICD_EXPORT VK_RESULT VKAPI vkDeviceWaitIdle(
+ VK_DEVICE device)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetValidationLevel(
- XGL_DEVICE device,
- XGL_VALIDATION_LEVEL validationLevel)
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetValidationLevel(
+ VK_DEVICE device,
+ VK_VALIDATION_LEVEL validationLevel)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetMessageFilter(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetMessageFilter(
+ VK_DEVICE device,
int32_t msgCode,
- XGL_DBG_MSG_FILTER filter)
+ VK_DBG_MSG_FILTER filter)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetDeviceOption(
- XGL_DEVICE device,
- XGL_DBG_DEVICE_OPTION dbgOption,
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetDeviceOption(
+ VK_DEVICE device,
+ VK_DBG_DEVICE_OPTION dbgOption,
size_t dataSize,
const void* pData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateEvent(
- XGL_DEVICE device,
- const XGL_EVENT_CREATE_INFO* pCreateInfo,
- XGL_EVENT* pEvent)
+ICD_EXPORT VK_RESULT VKAPI vkCreateEvent(
+ VK_DEVICE device,
+ const VK_EVENT_CREATE_INFO* pCreateInfo,
+ VK_EVENT* pEvent)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetEventStatus(
- XGL_EVENT event_)
+ICD_EXPORT VK_RESULT VKAPI vkGetEventStatus(
+ VK_EVENT event_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglSetEvent(
- XGL_EVENT event_)
+ICD_EXPORT VK_RESULT VKAPI vkSetEvent(
+ VK_EVENT event_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglResetEvent(
- XGL_EVENT event_)
+ICD_EXPORT VK_RESULT VKAPI vkResetEvent(
+ VK_EVENT event_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateFence(
- XGL_DEVICE device,
- const XGL_FENCE_CREATE_INFO* pCreateInfo,
- XGL_FENCE* pFence)
+ICD_EXPORT VK_RESULT VKAPI vkCreateFence(
+ VK_DEVICE device,
+ const VK_FENCE_CREATE_INFO* pCreateInfo,
+ VK_FENCE* pFence)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
    return nulldrv_fence_create(dev, pCreateInfo,
(struct nulldrv_fence **) pFence);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetFenceStatus(
- XGL_FENCE fence_)
+ICD_EXPORT VK_RESULT VKAPI vkGetFenceStatus(
+ VK_FENCE fence_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglWaitForFences(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkWaitForFences(
+ VK_DEVICE device,
uint32_t fenceCount,
- const XGL_FENCE* pFences,
+ const VK_FENCE* pFences,
bool32_t waitAll,
uint64_t timeout)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetFormatInfo(
- XGL_DEVICE device,
- XGL_FORMAT format,
- XGL_FORMAT_INFO_TYPE infoType,
+ICD_EXPORT VK_RESULT VKAPI vkGetFormatInfo(
+ VK_DEVICE device,
+ VK_FORMAT format,
+ VK_FORMAT_INFO_TYPE infoType,
size_t* pDataSize,
void* pData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetGpuInfo(
- XGL_PHYSICAL_GPU gpu_,
- XGL_PHYSICAL_GPU_INFO_TYPE infoType,
+ICD_EXPORT VK_RESULT VKAPI vkGetGpuInfo(
+ VK_PHYSICAL_GPU gpu_,
+ VK_PHYSICAL_GPU_INFO_TYPE infoType,
size_t* pDataSize,
void* pData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(
- XGL_PHYSICAL_GPU gpu_,
+ICD_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(
+ VK_PHYSICAL_GPU gpu_,
const char* pExtName)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetMultiGpuCompatibility(
- XGL_PHYSICAL_GPU gpu0_,
- XGL_PHYSICAL_GPU gpu1_,
- XGL_GPU_COMPATIBILITY_INFO* pInfo)
+ICD_EXPORT VK_RESULT VKAPI vkGetMultiGpuCompatibility(
+ VK_PHYSICAL_GPU gpu0_,
+ VK_PHYSICAL_GPU gpu1_,
+ VK_GPU_COMPATIBILITY_INFO* pInfo)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglOpenPeerImage(
- XGL_DEVICE device,
- const XGL_PEER_IMAGE_OPEN_INFO* pOpenInfo,
- XGL_IMAGE* pImage,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkOpenPeerImage(
+ VK_DEVICE device,
+ const VK_PEER_IMAGE_OPEN_INFO* pOpenInfo,
+ VK_IMAGE* pImage,
+ VK_GPU_MEMORY* pMem)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateImage(
- XGL_DEVICE device,
- const XGL_IMAGE_CREATE_INFO* pCreateInfo,
- XGL_IMAGE* pImage)
+ICD_EXPORT VK_RESULT VKAPI vkCreateImage(
+ VK_DEVICE device,
+ const VK_IMAGE_CREATE_INFO* pCreateInfo,
+ VK_IMAGE* pImage)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
    return nulldrv_img_create(dev, pCreateInfo, false,
(struct nulldrv_img **) pImage);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetImageSubresourceInfo(
- XGL_IMAGE image,
- const XGL_IMAGE_SUBRESOURCE* pSubresource,
- XGL_SUBRESOURCE_INFO_TYPE infoType,
+ICD_EXPORT VK_RESULT VKAPI vkGetImageSubresourceInfo(
+ VK_IMAGE image,
+ const VK_IMAGE_SUBRESOURCE* pSubresource,
+ VK_SUBRESOURCE_INFO_TYPE infoType,
size_t* pDataSize,
void* pData)
{
NULLDRV_LOG_FUNC;
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
switch (infoType) {
- case XGL_INFO_TYPE_SUBRESOURCE_LAYOUT:
+ case VK_INFO_TYPE_SUBRESOURCE_LAYOUT:
{
- XGL_SUBRESOURCE_LAYOUT *layout = (XGL_SUBRESOURCE_LAYOUT *) pData;
+ VK_SUBRESOURCE_LAYOUT *layout = (VK_SUBRESOURCE_LAYOUT *) pData;
- *pDataSize = sizeof(XGL_SUBRESOURCE_LAYOUT);
+ *pDataSize = sizeof(VK_SUBRESOURCE_LAYOUT);
if (pData == NULL)
return ret;
}
break;
default:
- ret = XGL_ERROR_INVALID_VALUE;
+ ret = VK_ERROR_INVALID_VALUE;
break;
}
return ret;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglAllocMemory(
- XGL_DEVICE device,
- const XGL_MEMORY_ALLOC_INFO* pAllocInfo,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkAllocMemory(
+ VK_DEVICE device,
+ const VK_MEMORY_ALLOC_INFO* pAllocInfo,
+ VK_GPU_MEMORY* pMem)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
return nulldrv_mem_alloc(dev, pAllocInfo, (struct nulldrv_mem **) pMem);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglFreeMemory(
- XGL_GPU_MEMORY mem_)
+ICD_EXPORT VK_RESULT VKAPI vkFreeMemory(
+ VK_GPU_MEMORY mem_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglSetMemoryPriority(
- XGL_GPU_MEMORY mem_,
- XGL_MEMORY_PRIORITY priority)
+ICD_EXPORT VK_RESULT VKAPI vkSetMemoryPriority(
+ VK_GPU_MEMORY mem_,
+ VK_MEMORY_PRIORITY priority)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglMapMemory(
- XGL_GPU_MEMORY mem_,
- XGL_FLAGS flags,
+ICD_EXPORT VK_RESULT VKAPI vkMapMemory(
+ VK_GPU_MEMORY mem_,
+ VK_FLAGS flags,
void** ppData)
{
NULLDRV_LOG_FUNC;
*ppData = ptr;
- return (ptr) ? XGL_SUCCESS : XGL_ERROR_UNKNOWN;
+ return (ptr) ? VK_SUCCESS : VK_ERROR_UNKNOWN;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglUnmapMemory(
- XGL_GPU_MEMORY mem_)
+ICD_EXPORT VK_RESULT VKAPI vkUnmapMemory(
+ VK_GPU_MEMORY mem_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglPinSystemMemory(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkPinSystemMemory(
+ VK_DEVICE device,
const void* pSysMem,
size_t memSize,
- XGL_GPU_MEMORY* pMem)
+ VK_GPU_MEMORY* pMem)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglOpenSharedMemory(
- XGL_DEVICE device,
- const XGL_MEMORY_OPEN_INFO* pOpenInfo,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkOpenSharedMemory(
+ VK_DEVICE device,
+ const VK_MEMORY_OPEN_INFO* pOpenInfo,
+ VK_GPU_MEMORY* pMem)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglOpenPeerMemory(
- XGL_DEVICE device,
- const XGL_PEER_MEMORY_OPEN_INFO* pOpenInfo,
- XGL_GPU_MEMORY* pMem)
+ICD_EXPORT VK_RESULT VKAPI vkOpenPeerMemory(
+ VK_DEVICE device,
+ const VK_PEER_MEMORY_OPEN_INFO* pOpenInfo,
+ VK_GPU_MEMORY* pMem)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateInstance(
- const XGL_INSTANCE_CREATE_INFO* pCreateInfo,
- XGL_INSTANCE* pInstance)
+ICD_EXPORT VK_RESULT VKAPI vkCreateInstance(
+ const VK_INSTANCE_CREATE_INFO* pCreateInfo,
+ VK_INSTANCE* pInstance)
{
NULLDRV_LOG_FUNC;
struct nulldrv_instance *inst;
inst = (struct nulldrv_instance *) nulldrv_base_create(NULL, sizeof(*inst),
- XGL_DBG_OBJECT_INSTANCE);
+ VK_DBG_OBJECT_INSTANCE);
if (!inst)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
inst->obj.base.get_info = NULL;
- *pInstance = (XGL_INSTANCE*)inst;
+ *pInstance = (VK_INSTANCE*)inst;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDestroyInstance(
- XGL_INSTANCE pInstance)
+ICD_EXPORT VK_RESULT VKAPI vkDestroyInstance(
+ VK_INSTANCE pInstance)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglEnumerateGpus(
- XGL_INSTANCE instance,
+ICD_EXPORT VK_RESULT VKAPI vkEnumerateGpus(
+ VK_INSTANCE instance,
uint32_t maxGpus,
uint32_t* pGpuCount,
- XGL_PHYSICAL_GPU* pGpus)
+ VK_PHYSICAL_GPU* pGpus)
{
NULLDRV_LOG_FUNC;
- XGL_RESULT ret;
+ VK_RESULT ret;
struct nulldrv_gpu *gpu;
*pGpuCount = 1;
ret = nulldrv_gpu_add(0, 0, 0, &gpu);
- if (ret == XGL_SUCCESS)
- pGpus[0] = (XGL_PHYSICAL_GPU) gpu;
+ if (ret == VK_SUCCESS)
+ pGpus[0] = (VK_PHYSICAL_GPU) gpu;
return ret;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(
- XGL_PHYSICAL_GPU gpu,
+ICD_EXPORT VK_RESULT VKAPI vkEnumerateLayers(
+ VK_PHYSICAL_GPU gpu,
size_t maxLayerCount,
size_t maxStringSize,
size_t* pOutLayerCount,
void* pReserved)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(
- XGL_INSTANCE instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback,
+ICD_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(
+ VK_INSTANCE instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback,
void* pUserData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(
- XGL_INSTANCE instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
+ICD_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(
+ VK_INSTANCE instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetGlobalOption(
- XGL_INSTANCE instance,
- XGL_DBG_GLOBAL_OPTION dbgOption,
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetGlobalOption(
+ VK_INSTANCE instance,
+ VK_DBG_GLOBAL_OPTION dbgOption,
size_t dataSize,
const void* pData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDestroyObject(
- XGL_OBJECT object)
+ICD_EXPORT VK_RESULT VKAPI vkDestroyObject(
+ VK_OBJECT object)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetObjectInfo(
- XGL_BASE_OBJECT object,
- XGL_OBJECT_INFO_TYPE infoType,
+ICD_EXPORT VK_RESULT VKAPI vkGetObjectInfo(
+ VK_BASE_OBJECT object,
+ VK_OBJECT_INFO_TYPE infoType,
size_t* pDataSize,
void* pData)
{
    NULLDRV_LOG_FUNC;
    struct nulldrv_base *base = nulldrv_base(object);
return base->get_info(base, infoType, pDataSize, pData);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBindObjectMemory(
- XGL_OBJECT object,
+ICD_EXPORT VK_RESULT VKAPI vkBindObjectMemory(
+ VK_OBJECT object,
uint32_t allocationIdx,
- XGL_GPU_MEMORY mem_,
- XGL_GPU_SIZE memOffset)
+ VK_GPU_MEMORY mem_,
+ VK_GPU_SIZE memOffset)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBindObjectMemoryRange(
- XGL_OBJECT object,
+ICD_EXPORT VK_RESULT VKAPI vkBindObjectMemoryRange(
+ VK_OBJECT object,
uint32_t allocationIdx,
- XGL_GPU_SIZE rangeOffset,
- XGL_GPU_SIZE rangeSize,
- XGL_GPU_MEMORY mem,
- XGL_GPU_SIZE memOffset)
+ VK_GPU_SIZE rangeOffset,
+ VK_GPU_SIZE rangeSize,
+ VK_GPU_MEMORY mem,
+ VK_GPU_SIZE memOffset)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBindImageMemoryRange(
- XGL_IMAGE image,
+ICD_EXPORT VK_RESULT VKAPI vkBindImageMemoryRange(
+ VK_IMAGE image,
uint32_t allocationIdx,
- const XGL_IMAGE_MEMORY_BIND_INFO* bindInfo,
- XGL_GPU_MEMORY mem,
- XGL_GPU_SIZE memOffset)
+ const VK_IMAGE_MEMORY_BIND_INFO* bindInfo,
+ VK_GPU_MEMORY mem,
+ VK_GPU_SIZE memOffset)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglDbgSetObjectTag(
- XGL_BASE_OBJECT object,
+ICD_EXPORT VK_RESULT VKAPI vkDbgSetObjectTag(
+ VK_BASE_OBJECT object,
size_t tagSize,
const void* pTag)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipeline(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE* pPipeline)
+ICD_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipeline(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE* pPipeline)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_pipeline **) pPipeline);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipelineDerivative(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline)
+ICD_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipelineDerivative(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_pipeline **) pPipeline);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateComputePipeline(
- XGL_DEVICE device,
- const XGL_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE* pPipeline)
+ICD_EXPORT VK_RESULT VKAPI vkCreateComputePipeline(
+ VK_DEVICE device,
+ const VK_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE* pPipeline)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglStorePipeline(
- XGL_PIPELINE pipeline,
+ICD_EXPORT VK_RESULT VKAPI vkStorePipeline(
+ VK_PIPELINE pipeline,
size_t* pDataSize,
void* pData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglLoadPipeline(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkLoadPipeline(
+ VK_DEVICE device,
size_t dataSize,
const void* pData,
- XGL_PIPELINE* pPipeline)
+ VK_PIPELINE* pPipeline)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglLoadPipelineDerivative(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkLoadPipelineDerivative(
+ VK_DEVICE device,
size_t dataSize,
const void* pData,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline)
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateQueryPool(
- XGL_DEVICE device,
- const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo,
- XGL_QUERY_POOL* pQueryPool)
+ICD_EXPORT VK_RESULT VKAPI vkCreateQueryPool(
+ VK_DEVICE device,
+ const VK_QUERY_POOL_CREATE_INFO* pCreateInfo,
+ VK_QUERY_POOL* pQueryPool)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglGetQueryPoolResults(
- XGL_QUERY_POOL queryPool,
+ICD_EXPORT VK_RESULT VKAPI vkGetQueryPoolResults(
+ VK_QUERY_POOL queryPool,
uint32_t startQuery,
uint32_t queryCount,
size_t* pDataSize,
void* pData)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueAddMemReference(
- XGL_QUEUE queue,
- XGL_GPU_MEMORY mem)
+ICD_EXPORT VK_RESULT VKAPI vkQueueAddMemReference(
+ VK_QUEUE queue,
+ VK_GPU_MEMORY mem)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueRemoveMemReference(
- XGL_QUEUE queue,
- XGL_GPU_MEMORY mem)
+ICD_EXPORT VK_RESULT VKAPI vkQueueRemoveMemReference(
+ VK_QUEUE queue,
+ VK_GPU_MEMORY mem)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueWaitIdle(
- XGL_QUEUE queue_)
+ICD_EXPORT VK_RESULT VKAPI vkQueueWaitIdle(
+ VK_QUEUE queue_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueSubmit(
- XGL_QUEUE queue_,
+ICD_EXPORT VK_RESULT VKAPI vkQueueSubmit(
+ VK_QUEUE queue_,
uint32_t cmdBufferCount,
- const XGL_CMD_BUFFER* pCmdBuffers,
- XGL_FENCE fence_)
+ const VK_CMD_BUFFER* pCmdBuffers,
+ VK_FENCE fence_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglOpenSharedSemaphore(
- XGL_DEVICE device,
- const XGL_SEMAPHORE_OPEN_INFO* pOpenInfo,
- XGL_SEMAPHORE* pSemaphore)
+ICD_EXPORT VK_RESULT VKAPI vkOpenSharedSemaphore(
+ VK_DEVICE device,
+ const VK_SEMAPHORE_OPEN_INFO* pOpenInfo,
+ VK_SEMAPHORE* pSemaphore)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateSemaphore(
- XGL_DEVICE device,
- const XGL_SEMAPHORE_CREATE_INFO* pCreateInfo,
- XGL_SEMAPHORE* pSemaphore)
+ICD_EXPORT VK_RESULT VKAPI vkCreateSemaphore(
+ VK_DEVICE device,
+ const VK_SEMAPHORE_CREATE_INFO* pCreateInfo,
+ VK_SEMAPHORE* pSemaphore)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueSignalSemaphore(
- XGL_QUEUE queue,
- XGL_SEMAPHORE semaphore)
+ICD_EXPORT VK_RESULT VKAPI vkQueueSignalSemaphore(
+ VK_QUEUE queue,
+ VK_SEMAPHORE semaphore)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglQueueWaitSemaphore(
- XGL_QUEUE queue,
- XGL_SEMAPHORE semaphore)
+ICD_EXPORT VK_RESULT VKAPI vkQueueWaitSemaphore(
+ VK_QUEUE queue,
+ VK_SEMAPHORE semaphore)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateSampler(
- XGL_DEVICE device,
- const XGL_SAMPLER_CREATE_INFO* pCreateInfo,
- XGL_SAMPLER* pSampler)
+ICD_EXPORT VK_RESULT VKAPI vkCreateSampler(
+ VK_DEVICE device,
+ const VK_SAMPLER_CREATE_INFO* pCreateInfo,
+ VK_SAMPLER* pSampler)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_sampler **) pSampler);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateShader(
- XGL_DEVICE device,
- const XGL_SHADER_CREATE_INFO* pCreateInfo,
- XGL_SHADER* pShader)
+ICD_EXPORT VK_RESULT VKAPI vkCreateShader(
+ VK_DEVICE device,
+ const VK_SHADER_CREATE_INFO* pCreateInfo,
+ VK_SHADER* pShader)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
return shader_create(dev, pCreateInfo, (struct nulldrv_shader **) pShader);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDynamicViewportState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_VP_STATE_OBJECT* pState)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDynamicViewportState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_VP_STATE_OBJECT* pState)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_dynamic_vp **) pState);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDynamicRasterState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_RS_STATE_OBJECT* pState)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDynamicRasterState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_RS_STATE_OBJECT* pState)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_dynamic_rs **) pState);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDynamicColorBlendState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_CB_STATE_OBJECT* pState)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDynamicColorBlendState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_CB_STATE_OBJECT* pState)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_dynamic_cb **) pState);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDynamicDepthStencilState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_DS_STATE_OBJECT* pState)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDynamicDepthStencilState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_DS_STATE_OBJECT* pState)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_dynamic_ds **) pState);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateBufferView(
- XGL_DEVICE device,
- const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo,
- XGL_BUFFER_VIEW* pView)
+ICD_EXPORT VK_RESULT VKAPI vkCreateBufferView(
+ VK_DEVICE device,
+ const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo,
+ VK_BUFFER_VIEW* pView)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_buf_view **) pView);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateImageView(
- XGL_DEVICE device,
- const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
- XGL_IMAGE_VIEW* pView)
+ICD_EXPORT VK_RESULT VKAPI vkCreateImageView(
+ VK_DEVICE device,
+ const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
+ VK_IMAGE_VIEW* pView)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_img_view **) pView);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateColorAttachmentView(
- XGL_DEVICE device,
- const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
- XGL_COLOR_ATTACHMENT_VIEW* pView)
+ICD_EXPORT VK_RESULT VKAPI vkCreateColorAttachmentView(
+ VK_DEVICE device,
+ const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
+ VK_COLOR_ATTACHMENT_VIEW* pView)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_rt_view **) pView);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDepthStencilView(
- XGL_DEVICE device,
- const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo,
- XGL_DEPTH_STENCIL_VIEW* pView)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDepthStencilView(
+ VK_DEVICE device,
+ const VK_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo,
+ VK_DEPTH_STENCIL_VIEW* pView)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayout(
- XGL_DEVICE device,
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo,
- XGL_DESCRIPTOR_SET_LAYOUT* pSetLayout)
+ICD_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayout(
+ VK_DEVICE device,
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo,
+ VK_DESCRIPTOR_SET_LAYOUT* pSetLayout)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_desc_layout **) pSetLayout);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayoutChain(
- XGL_DEVICE device,
+ICD_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayoutChain(
+ VK_DEVICE device,
uint32_t setLayoutArrayCount,
- const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray,
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
+ const VK_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray,
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_desc_layout_chain **) pLayoutChain);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglBeginDescriptorPoolUpdate(
- XGL_DEVICE device,
- XGL_DESCRIPTOR_UPDATE_MODE updateMode)
+ICD_EXPORT VK_RESULT VKAPI vkBeginDescriptorPoolUpdate(
+ VK_DEVICE device,
+ VK_DESCRIPTOR_UPDATE_MODE updateMode)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglEndDescriptorPoolUpdate(
- XGL_DEVICE device,
- XGL_CMD_BUFFER cmd_)
+ICD_EXPORT VK_RESULT VKAPI vkEndDescriptorPoolUpdate(
+ VK_DEVICE device,
+ VK_CMD_BUFFER cmd_)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorPool(
- XGL_DEVICE device,
- XGL_DESCRIPTOR_POOL_USAGE poolUsage,
+ICD_EXPORT VK_RESULT VKAPI vkCreateDescriptorPool(
+ VK_DEVICE device,
+ VK_DESCRIPTOR_POOL_USAGE poolUsage,
uint32_t maxSets,
- const XGL_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo,
- XGL_DESCRIPTOR_POOL* pDescriptorPool)
+ const VK_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo,
+ VK_DESCRIPTOR_POOL* pDescriptorPool)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
(struct nulldrv_desc_pool **) pDescriptorPool);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglResetDescriptorPool(
- XGL_DESCRIPTOR_POOL descriptorPool)
+ICD_EXPORT VK_RESULT VKAPI vkResetDescriptorPool(
+ VK_DESCRIPTOR_POOL descriptorPool)
{
NULLDRV_LOG_FUNC;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglAllocDescriptorSets(
- XGL_DESCRIPTOR_POOL descriptorPool,
- XGL_DESCRIPTOR_SET_USAGE setUsage,
+ICD_EXPORT VK_RESULT VKAPI vkAllocDescriptorSets(
+ VK_DESCRIPTOR_POOL descriptorPool,
+ VK_DESCRIPTOR_SET_USAGE setUsage,
uint32_t count,
- const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayouts,
- XGL_DESCRIPTOR_SET* pDescriptorSets,
+ const VK_DESCRIPTOR_SET_LAYOUT* pSetLayouts,
+ VK_DESCRIPTOR_SET* pDescriptorSets,
uint32_t* pCount)
{
NULLDRV_LOG_FUNC;
struct nulldrv_desc_pool *pool = nulldrv_desc_pool(descriptorPool);
struct nulldrv_dev *dev = pool->dev;
- XGL_RESULT ret = XGL_SUCCESS;
+ VK_RESULT ret = VK_SUCCESS;
uint32_t i;
for (i = 0; i < count; i++) {
const struct nulldrv_desc_layout *layout =
- nulldrv_desc_layout((XGL_DESCRIPTOR_SET_LAYOUT) pSetLayouts[i]);
+ nulldrv_desc_layout((VK_DESCRIPTOR_SET_LAYOUT) pSetLayouts[i]);
ret = nulldrv_desc_set_create(dev, pool, setUsage, layout,
(struct nulldrv_desc_set **) &pDescriptorSets[i]);
- if (ret != XGL_SUCCESS)
+ if (ret != VK_SUCCESS)
break;
}
return ret;
}
-ICD_EXPORT void XGLAPI xglClearDescriptorSets(
- XGL_DESCRIPTOR_POOL descriptorPool,
+ICD_EXPORT void VKAPI vkClearDescriptorSets(
+ VK_DESCRIPTOR_POOL descriptorPool,
uint32_t count,
- const XGL_DESCRIPTOR_SET* pDescriptorSets)
+ const VK_DESCRIPTOR_SET* pDescriptorSets)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglUpdateDescriptors(
- XGL_DESCRIPTOR_SET descriptorSet,
+ICD_EXPORT void VKAPI vkUpdateDescriptors(
+ VK_DESCRIPTOR_SET descriptorSet,
uint32_t updateCount,
const void** ppUpdateArray)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateFramebuffer(
- XGL_DEVICE device,
- const XGL_FRAMEBUFFER_CREATE_INFO* info,
- XGL_FRAMEBUFFER* fb_ret)
+ICD_EXPORT VK_RESULT VKAPI vkCreateFramebuffer(
+ VK_DEVICE device,
+ const VK_FRAMEBUFFER_CREATE_INFO* info,
+ VK_FRAMEBUFFER* fb_ret)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
}
-ICD_EXPORT XGL_RESULT XGLAPI xglCreateRenderPass(
- XGL_DEVICE device,
- const XGL_RENDER_PASS_CREATE_INFO* info,
- XGL_RENDER_PASS* rp_ret)
+ICD_EXPORT VK_RESULT VKAPI vkCreateRenderPass(
+ VK_DEVICE device,
+ const VK_RENDER_PASS_CREATE_INFO* info,
+ VK_RENDER_PASS* rp_ret)
{
NULLDRV_LOG_FUNC;
struct nulldrv_dev *dev = nulldrv_dev(device);
return nulldrv_render_pass_create(dev, info, (struct nulldrv_render_pass **) rp_ret);
}
-ICD_EXPORT void XGLAPI xglCmdBeginRenderPass(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_RENDER_PASS_BEGIN* pRenderPassBegin)
+ICD_EXPORT void VKAPI vkCmdBeginRenderPass(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_RENDER_PASS_BEGIN* pRenderPassBegin)
{
NULLDRV_LOG_FUNC;
}
-ICD_EXPORT void XGLAPI xglCmdEndRenderPass(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_RENDER_PASS renderPass)
+ICD_EXPORT void VKAPI vkCmdEndRenderPass(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_RENDER_PASS renderPass)
{
NULLDRV_LOG_FUNC;
}
return 0;
}
-ICD_EXPORT XGL_RESULT xcbQueuePresent(void *queue, void *image, void* fence)
+ICD_EXPORT VK_RESULT xcbQueuePresent(void *queue, void *image, void* fence)
{
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <string.h>
#include <assert.h>
-#include <xgl.h>
-#include <xglDbg.h>
-#include <xglIcd.h>
+#include <vulkan.h>
+#include <vkDbg.h>
+#include <vkIcd.h>
#if defined(PLATFORM_LINUX)
-#include <xglWsiX11Ext.h>
+#include <vkWsiX11Ext.h>
#else
-#include <xglWsiWinExt.h>
+#include <vkWsiWinExt.h>
#endif
#include "icd.h"
struct nulldrv_base {
void *loader_data;
uint32_t magic;
- XGL_RESULT (*get_info)(struct nulldrv_base *base, int type1,
+ VK_RESULT (*get_info)(struct nulldrv_base *base, int type1,
size_t *size, void *data);
};
struct nulldrv_img {
struct nulldrv_obj obj;
- XGL_IMAGE_TYPE type;
+ VK_IMAGE_TYPE type;
int32_t depth;
uint32_t mip_levels;
uint32_t array_size;
- XGL_FLAGS usage;
- XGL_IMAGE_FORMAT_CLASS format_class;
+ VK_FLAGS usage;
+ VK_IMAGE_FORMAT_CLASS format_class;
uint32_t samples;
size_t total_size;
};
struct nulldrv_mem {
struct nulldrv_base base;
struct nulldrv_bo *bo;
- XGL_GPU_SIZE size;
+ VK_GPU_SIZE size;
};
struct nulldrv_ds_view {
struct nulldrv_buf {
struct nulldrv_obj obj;
- XGL_GPU_SIZE size;
- XGL_FLAGS usage;
+ VK_GPU_SIZE size;
+ VK_FLAGS usage;
};
struct nulldrv_desc_layout {
//
-// File: xgl.h
+// File: vulkan.h
//
/*
** Copyright (c) 2014 The Khronos Group Inc.
** MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
*/
-#ifndef __XGL_H__
-#define __XGL_H__
+#ifndef __VULKAN_H__
+#define __VULKAN_H__
-#define XGL_MAKE_VERSION(major, minor, patch) \
+#define VK_MAKE_VERSION(major, minor, patch) \
((major << 22) | (minor << 12) | patch)
-#include "xglPlatform.h"
+#include "vkPlatform.h"
-// XGL API version supported by this file
-#define XGL_API_VERSION XGL_MAKE_VERSION(0, 67, 0)
+// VK API version supported by this file
+#define VK_API_VERSION VK_MAKE_VERSION(0, 67, 0)
#ifdef __cplusplus
extern "C"
/*
***************************************************************************************************
-* Core XGL API
+* Core VK API
***************************************************************************************************
*/
#ifdef __cplusplus
- #define XGL_DEFINE_HANDLE(_obj) struct _obj##_T {char _dummy;}; typedef _obj##_T* _obj;
- #define XGL_DEFINE_SUBCLASS_HANDLE(_obj, _base) struct _obj##_T : public _base##_T {}; typedef _obj##_T* _obj;
+ #define VK_DEFINE_HANDLE(_obj) struct _obj##_T {char _dummy;}; typedef _obj##_T* _obj;
+ #define VK_DEFINE_SUBCLASS_HANDLE(_obj, _base) struct _obj##_T : public _base##_T {}; typedef _obj##_T* _obj;
#else // __cplusplus
- #define XGL_DEFINE_HANDLE(_obj) typedef void* _obj;
- #define XGL_DEFINE_SUBCLASS_HANDLE(_obj, _base) typedef void* _obj;
+ #define VK_DEFINE_HANDLE(_obj) typedef void* _obj;
+ #define VK_DEFINE_SUBCLASS_HANDLE(_obj, _base) typedef void* _obj;
#endif // __cplusplus
-XGL_DEFINE_HANDLE(XGL_INSTANCE)
-XGL_DEFINE_HANDLE(XGL_PHYSICAL_GPU)
-XGL_DEFINE_HANDLE(XGL_BASE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DEVICE, XGL_BASE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_QUEUE, XGL_BASE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_GPU_MEMORY, XGL_BASE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_OBJECT, XGL_BASE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_BUFFER, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_BUFFER_VIEW, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_IMAGE, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_IMAGE_VIEW, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_COLOR_ATTACHMENT_VIEW, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DEPTH_STENCIL_VIEW, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_SHADER, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_PIPELINE, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_SAMPLER, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DESCRIPTOR_SET, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DESCRIPTOR_SET_LAYOUT, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DESCRIPTOR_SET_LAYOUT_CHAIN, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DESCRIPTOR_POOL, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DYNAMIC_STATE_OBJECT, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DYNAMIC_VP_STATE_OBJECT, XGL_DYNAMIC_STATE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DYNAMIC_RS_STATE_OBJECT, XGL_DYNAMIC_STATE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DYNAMIC_CB_STATE_OBJECT, XGL_DYNAMIC_STATE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_DYNAMIC_DS_STATE_OBJECT, XGL_DYNAMIC_STATE_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_CMD_BUFFER, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_FENCE, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_SEMAPHORE, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_EVENT, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_QUERY_POOL, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_FRAMEBUFFER, XGL_OBJECT)
-XGL_DEFINE_SUBCLASS_HANDLE(XGL_RENDER_PASS, XGL_OBJECT)
-
-#define XGL_MAX_PHYSICAL_GPUS 16
-#define XGL_MAX_PHYSICAL_GPU_NAME 256
-
-#define XGL_LOD_CLAMP_NONE MAX_FLOAT
-#define XGL_LAST_MIP_OR_SLICE 0xffffffff
-
-#define XGL_TRUE 1
-#define XGL_FALSE 0
-
-#define XGL_NULL_HANDLE 0
+VK_DEFINE_HANDLE(VK_INSTANCE)
+VK_DEFINE_HANDLE(VK_PHYSICAL_GPU)
+VK_DEFINE_HANDLE(VK_BASE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DEVICE, VK_BASE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_QUEUE, VK_BASE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_GPU_MEMORY, VK_BASE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_OBJECT, VK_BASE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_BUFFER, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_BUFFER_VIEW, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_IMAGE, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_IMAGE_VIEW, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_COLOR_ATTACHMENT_VIEW, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DEPTH_STENCIL_VIEW, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_SHADER, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_PIPELINE, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_SAMPLER, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DESCRIPTOR_SET, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DESCRIPTOR_SET_LAYOUT, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DESCRIPTOR_SET_LAYOUT_CHAIN, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DESCRIPTOR_POOL, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DYNAMIC_STATE_OBJECT, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DYNAMIC_VP_STATE_OBJECT, VK_DYNAMIC_STATE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DYNAMIC_RS_STATE_OBJECT, VK_DYNAMIC_STATE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DYNAMIC_CB_STATE_OBJECT, VK_DYNAMIC_STATE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_DYNAMIC_DS_STATE_OBJECT, VK_DYNAMIC_STATE_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_CMD_BUFFER, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_FENCE, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_SEMAPHORE, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_EVENT, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_QUERY_POOL, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_FRAMEBUFFER, VK_OBJECT)
+VK_DEFINE_SUBCLASS_HANDLE(VK_RENDER_PASS, VK_OBJECT)
+
+#define VK_MAX_PHYSICAL_GPUS 16
+#define VK_MAX_PHYSICAL_GPU_NAME 256
+
+#define VK_LOD_CLAMP_NONE MAX_FLOAT
+#define VK_LAST_MIP_OR_SLICE 0xffffffff
+
+#define VK_TRUE 1
+#define VK_FALSE 0
+
+#define VK_NULL_HANDLE 0
// This macro defines INT_MAX in enumerations to force compilers to use 32 bits
// to represent them. This may or may not be necessary on some compilers. The
// option to compile it out may allow compilers that warn about missing enumerants
// in switch statements to be silenced.
-#define XGL_MAX_ENUM(T) T##_MAX_ENUM = 0x7FFFFFFF
+#define VK_MAX_ENUM(T) T##_MAX_ENUM = 0x7FFFFFFF
// ------------------------------------------------------------------------------------------------
// Enumerations
-typedef enum _XGL_MEMORY_PRIORITY
-{
- XGL_MEMORY_PRIORITY_UNUSED = 0x0,
- XGL_MEMORY_PRIORITY_VERY_LOW = 0x1,
- XGL_MEMORY_PRIORITY_LOW = 0x2,
- XGL_MEMORY_PRIORITY_NORMAL = 0x3,
- XGL_MEMORY_PRIORITY_HIGH = 0x4,
- XGL_MEMORY_PRIORITY_VERY_HIGH = 0x5,
-
- XGL_MEMORY_PRIORITY_BEGIN_RANGE = XGL_MEMORY_PRIORITY_UNUSED,
- XGL_MEMORY_PRIORITY_END_RANGE = XGL_MEMORY_PRIORITY_VERY_HIGH,
- XGL_NUM_MEMORY_PRIORITY = (XGL_MEMORY_PRIORITY_END_RANGE - XGL_MEMORY_PRIORITY_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_MEMORY_PRIORITY)
-} XGL_MEMORY_PRIORITY;
-
-typedef enum _XGL_IMAGE_LAYOUT
-{
- XGL_IMAGE_LAYOUT_UNDEFINED = 0x00000000, // Implicit layout an image is when its contents are undefined due to various reasons (e.g. right after creation)
- XGL_IMAGE_LAYOUT_GENERAL = 0x00000001, // General layout when image can be used for any kind of access
- XGL_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL = 0x00000002, // Optimal layout when image is only used for color attachment read/write
- XGL_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL = 0x00000003, // Optimal layout when image is only used for depth/stencil attachment read/write
- XGL_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL = 0x00000004, // Optimal layout when image is used for read only depth/stencil attachment and shader access
- XGL_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL = 0x00000005, // Optimal layout when image is used for read only shader access
- XGL_IMAGE_LAYOUT_CLEAR_OPTIMAL = 0x00000006, // Optimal layout when image is used only for clear operations
- XGL_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL = 0x00000007, // Optimal layout when image is used only as source of transfer operations
- XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL = 0x00000008, // Optimal layout when image is used only as destination of transfer operations
-
- XGL_IMAGE_LAYOUT_BEGIN_RANGE = XGL_IMAGE_LAYOUT_UNDEFINED,
- XGL_IMAGE_LAYOUT_END_RANGE = XGL_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
- XGL_NUM_IMAGE_LAYOUT = (XGL_IMAGE_LAYOUT_END_RANGE - XGL_IMAGE_LAYOUT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_IMAGE_LAYOUT)
-} XGL_IMAGE_LAYOUT;
-
-typedef enum _XGL_PIPE_EVENT
-{
- XGL_PIPE_EVENT_TOP_OF_PIPE = 0x00000001, // Set event before the GPU starts processing subsequent command
- XGL_PIPE_EVENT_VERTEX_PROCESSING_COMPLETE = 0x00000002, // Set event when all pending vertex processing is complete
- XGL_PIPE_EVENT_LOCAL_FRAGMENT_PROCESSING_COMPLETE = 0x00000003, // Set event when all pending fragment shader executions are complete, within each fragment location
- XGL_PIPE_EVENT_FRAGMENT_PROCESSING_COMPLETE = 0x00000004, // Set event when all pending fragment shader executions are complete
- XGL_PIPE_EVENT_GRAPHICS_PIPELINE_COMPLETE = 0x00000005, // Set event when all pending graphics operations are complete
- XGL_PIPE_EVENT_COMPUTE_PIPELINE_COMPLETE = 0x00000006, // Set event when all pending compute operations are complete
- XGL_PIPE_EVENT_TRANSFER_COMPLETE = 0x00000007, // Set event when all pending transfer operations are complete
- XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE = 0x00000008, // Set event when all pending GPU work is complete
-
- XGL_PIPE_EVENT_BEGIN_RANGE = XGL_PIPE_EVENT_TOP_OF_PIPE,
- XGL_PIPE_EVENT_END_RANGE = XGL_PIPE_EVENT_GPU_COMMANDS_COMPLETE,
- XGL_NUM_PIPE_EVENT = (XGL_PIPE_EVENT_END_RANGE - XGL_PIPE_EVENT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PIPE_EVENT)
-} XGL_PIPE_EVENT;
-
-typedef enum _XGL_WAIT_EVENT
-{
- XGL_WAIT_EVENT_TOP_OF_PIPE = 0x00000001, // Wait event before the GPU starts processing subsequent commands
- XGL_WAIT_EVENT_BEFORE_RASTERIZATION = 0x00000002, // Wait event before rasterizing subsequent primitives
-
- XGL_WAIT_EVENT_BEGIN_RANGE = XGL_WAIT_EVENT_TOP_OF_PIPE,
- XGL_WAIT_EVENT_END_RANGE = XGL_WAIT_EVENT_BEFORE_RASTERIZATION,
- XGL_NUM_WAIT_EVENT = (XGL_WAIT_EVENT_END_RANGE - XGL_WAIT_EVENT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_WAIT_EVENT)
-} XGL_WAIT_EVENT;
-
-typedef enum _XGL_MEMORY_OUTPUT_FLAGS
-{
- XGL_MEMORY_OUTPUT_CPU_WRITE_BIT = 0x00000001, // Controls output coherency of CPU writes
- XGL_MEMORY_OUTPUT_SHADER_WRITE_BIT = 0x00000002, // Controls output coherency of generic shader writes
- XGL_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT = 0x00000004, // Controls output coherency of color attachment writes
- XGL_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000008, // Controls output coherency of depth/stencil attachment writes
- XGL_MEMORY_OUTPUT_COPY_BIT = 0x00000010, // Controls output coherency of copy operations
- XGL_MAX_ENUM(_XGL_MEMORY_OUTPUT_FLAGS)
-} XGL_MEMORY_OUTPUT_FLAGS;
-
-typedef enum _XGL_MEMORY_INPUT_FLAGS
-{
- XGL_MEMORY_INPUT_CPU_READ_BIT = 0x00000001, // Controls input coherency of CPU reads
- XGL_MEMORY_INPUT_INDIRECT_COMMAND_BIT = 0x00000002, // Controls input coherency of indirect command reads
- XGL_MEMORY_INPUT_INDEX_FETCH_BIT = 0x00000004, // Controls input coherency of index fetches
- XGL_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT = 0x00000008, // Controls input coherency of vertex attribute fetches
- XGL_MEMORY_INPUT_UNIFORM_READ_BIT = 0x00000010, // Controls input coherency of uniform buffer reads
- XGL_MEMORY_INPUT_SHADER_READ_BIT = 0x00000020, // Controls input coherency of generic shader reads
- XGL_MEMORY_INPUT_COLOR_ATTACHMENT_BIT = 0x00000040, // Controls input coherency of color attachment reads
- XGL_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000080, // Controls input coherency of depth/stencil attachment reads
- XGL_MEMORY_INPUT_COPY_BIT = 0x00000100, // Controls input coherency of copy operations
- XGL_MAX_ENUM(_XGL_MEMORY_INPUT_FLAGS)
-} XGL_MEMORY_INPUT_FLAGS;
-
-typedef enum _XGL_ATTACHMENT_LOAD_OP
-{
- XGL_ATTACHMENT_LOAD_OP_LOAD = 0x00000000,
- XGL_ATTACHMENT_LOAD_OP_CLEAR = 0x00000001,
- XGL_ATTACHMENT_LOAD_OP_DONT_CARE = 0x00000002,
-
- XGL_ATTACHMENT_LOAD_OP_BEGIN_RANGE = XGL_ATTACHMENT_LOAD_OP_LOAD,
- XGL_ATTACHMENT_LOAD_OP_END_RANGE = XGL_ATTACHMENT_LOAD_OP_DONT_CARE,
- XGL_NUM_ATTACHMENT_LOAD_OP = (XGL_ATTACHMENT_LOAD_OP_END_RANGE - XGL_ATTACHMENT_LOAD_OP_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_ATTACHMENT_LOAD_OP)
-} XGL_ATTACHMENT_LOAD_OP;
-
-typedef enum _XGL_ATTACHMENT_STORE_OP
-{
- XGL_ATTACHMENT_STORE_OP_STORE = 0x00000000,
- XGL_ATTACHMENT_STORE_OP_RESOLVE_MSAA = 0x00000001,
- XGL_ATTACHMENT_STORE_OP_DONT_CARE = 0x00000002,
-
- XGL_ATTACHMENT_STORE_OP_BEGIN_RANGE = XGL_ATTACHMENT_STORE_OP_STORE,
- XGL_ATTACHMENT_STORE_OP_END_RANGE = XGL_ATTACHMENT_STORE_OP_DONT_CARE,
- XGL_NUM_ATTACHMENT_STORE_OP = (XGL_ATTACHMENT_STORE_OP_END_RANGE - XGL_ATTACHMENT_STORE_OP_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_ATTACHMENT_STORE_OP)
-} XGL_ATTACHMENT_STORE_OP;
-
-typedef enum _XGL_IMAGE_TYPE
-{
- XGL_IMAGE_1D = 0x00000000,
- XGL_IMAGE_2D = 0x00000001,
- XGL_IMAGE_3D = 0x00000002,
-
- XGL_IMAGE_TYPE_BEGIN_RANGE = XGL_IMAGE_1D,
- XGL_IMAGE_TYPE_END_RANGE = XGL_IMAGE_3D,
- XGL_NUM_IMAGE_TYPE = (XGL_IMAGE_TYPE_END_RANGE - XGL_IMAGE_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_IMAGE_TYPE)
-} XGL_IMAGE_TYPE;
-
-typedef enum _XGL_IMAGE_TILING
-{
- XGL_LINEAR_TILING = 0x00000000,
- XGL_OPTIMAL_TILING = 0x00000001,
-
- XGL_IMAGE_TILING_BEGIN_RANGE = XGL_LINEAR_TILING,
- XGL_IMAGE_TILING_END_RANGE = XGL_OPTIMAL_TILING,
- XGL_NUM_IMAGE_TILING = (XGL_IMAGE_TILING_END_RANGE - XGL_IMAGE_TILING_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_IMAGE_TILING)
-} XGL_IMAGE_TILING;
-
-typedef enum _XGL_IMAGE_VIEW_TYPE
-{
- XGL_IMAGE_VIEW_1D = 0x00000000,
- XGL_IMAGE_VIEW_2D = 0x00000001,
- XGL_IMAGE_VIEW_3D = 0x00000002,
- XGL_IMAGE_VIEW_CUBE = 0x00000003,
-
- XGL_IMAGE_VIEW_TYPE_BEGIN_RANGE = XGL_IMAGE_VIEW_1D,
- XGL_IMAGE_VIEW_TYPE_END_RANGE = XGL_IMAGE_VIEW_CUBE,
- XGL_NUM_IMAGE_VIEW_TYPE = (XGL_IMAGE_VIEW_TYPE_END_RANGE - XGL_IMAGE_VIEW_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_IMAGE_VIEW_TYPE)
-} XGL_IMAGE_VIEW_TYPE;
-
-typedef enum _XGL_IMAGE_ASPECT
-{
- XGL_IMAGE_ASPECT_COLOR = 0x00000000,
- XGL_IMAGE_ASPECT_DEPTH = 0x00000001,
- XGL_IMAGE_ASPECT_STENCIL = 0x00000002,
-
- XGL_IMAGE_ASPECT_BEGIN_RANGE = XGL_IMAGE_ASPECT_COLOR,
- XGL_IMAGE_ASPECT_END_RANGE = XGL_IMAGE_ASPECT_STENCIL,
- XGL_NUM_IMAGE_ASPECT = (XGL_IMAGE_ASPECT_END_RANGE - XGL_IMAGE_ASPECT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_IMAGE_ASPECT)
-} XGL_IMAGE_ASPECT;
-
-typedef enum _XGL_CHANNEL_SWIZZLE
-{
- XGL_CHANNEL_SWIZZLE_ZERO = 0x00000000,
- XGL_CHANNEL_SWIZZLE_ONE = 0x00000001,
- XGL_CHANNEL_SWIZZLE_R = 0x00000002,
- XGL_CHANNEL_SWIZZLE_G = 0x00000003,
- XGL_CHANNEL_SWIZZLE_B = 0x00000004,
- XGL_CHANNEL_SWIZZLE_A = 0x00000005,
-
- XGL_CHANNEL_SWIZZLE_BEGIN_RANGE = XGL_CHANNEL_SWIZZLE_ZERO,
- XGL_CHANNEL_SWIZZLE_END_RANGE = XGL_CHANNEL_SWIZZLE_A,
- XGL_NUM_CHANNEL_SWIZZLE = (XGL_CHANNEL_SWIZZLE_END_RANGE - XGL_CHANNEL_SWIZZLE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_CHANNEL_SWIZZLE)
-} XGL_CHANNEL_SWIZZLE;
-
-typedef enum _XGL_DESCRIPTOR_TYPE
-{
- XGL_DESCRIPTOR_TYPE_SAMPLER = 0x00000000,
- XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE = 0x00000001,
- XGL_DESCRIPTOR_TYPE_TEXTURE = 0x00000002,
- XGL_DESCRIPTOR_TYPE_TEXTURE_BUFFER = 0x00000003,
- XGL_DESCRIPTOR_TYPE_IMAGE = 0x00000004,
- XGL_DESCRIPTOR_TYPE_IMAGE_BUFFER = 0x00000005,
- XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER = 0x00000006,
- XGL_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER = 0x00000007,
- XGL_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC = 0x00000008,
- XGL_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER_DYNAMIC = 0x00000009,
+typedef enum _VK_MEMORY_PRIORITY
+{
+ VK_MEMORY_PRIORITY_UNUSED = 0x0,
+ VK_MEMORY_PRIORITY_VERY_LOW = 0x1,
+ VK_MEMORY_PRIORITY_LOW = 0x2,
+ VK_MEMORY_PRIORITY_NORMAL = 0x3,
+ VK_MEMORY_PRIORITY_HIGH = 0x4,
+ VK_MEMORY_PRIORITY_VERY_HIGH = 0x5,
+
+ VK_MEMORY_PRIORITY_BEGIN_RANGE = VK_MEMORY_PRIORITY_UNUSED,
+ VK_MEMORY_PRIORITY_END_RANGE = VK_MEMORY_PRIORITY_VERY_HIGH,
+ VK_NUM_MEMORY_PRIORITY = (VK_MEMORY_PRIORITY_END_RANGE - VK_MEMORY_PRIORITY_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_MEMORY_PRIORITY)
+} VK_MEMORY_PRIORITY;
+
+typedef enum _VK_IMAGE_LAYOUT
+{
+ VK_IMAGE_LAYOUT_UNDEFINED = 0x00000000, // Implicit layout an image has when its contents are undefined (e.g. right after creation)
+ VK_IMAGE_LAYOUT_GENERAL = 0x00000001, // General layout when image can be used for any kind of access
+ VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL = 0x00000002, // Optimal layout when image is only used for color attachment read/write
+ VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL = 0x00000003, // Optimal layout when image is only used for depth/stencil attachment read/write
+ VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL = 0x00000004, // Optimal layout when image is used for read only depth/stencil attachment and shader access
+ VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL = 0x00000005, // Optimal layout when image is used for read only shader access
+ VK_IMAGE_LAYOUT_CLEAR_OPTIMAL = 0x00000006, // Optimal layout when image is used only for clear operations
+ VK_IMAGE_LAYOUT_TRANSFER_SOURCE_OPTIMAL = 0x00000007, // Optimal layout when image is used only as source of transfer operations
+ VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL = 0x00000008, // Optimal layout when image is used only as destination of transfer operations
+
+ VK_IMAGE_LAYOUT_BEGIN_RANGE = VK_IMAGE_LAYOUT_UNDEFINED,
+ VK_IMAGE_LAYOUT_END_RANGE = VK_IMAGE_LAYOUT_TRANSFER_DESTINATION_OPTIMAL,
+ VK_NUM_IMAGE_LAYOUT = (VK_IMAGE_LAYOUT_END_RANGE - VK_IMAGE_LAYOUT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_IMAGE_LAYOUT)
+} VK_IMAGE_LAYOUT;
+
+typedef enum _VK_PIPE_EVENT
+{
+ VK_PIPE_EVENT_TOP_OF_PIPE = 0x00000001, // Set event before the GPU starts processing subsequent commands
+ VK_PIPE_EVENT_VERTEX_PROCESSING_COMPLETE = 0x00000002, // Set event when all pending vertex processing is complete
+ VK_PIPE_EVENT_LOCAL_FRAGMENT_PROCESSING_COMPLETE = 0x00000003, // Set event when all pending fragment shader executions are complete, within each fragment location
+ VK_PIPE_EVENT_FRAGMENT_PROCESSING_COMPLETE = 0x00000004, // Set event when all pending fragment shader executions are complete
+ VK_PIPE_EVENT_GRAPHICS_PIPELINE_COMPLETE = 0x00000005, // Set event when all pending graphics operations are complete
+ VK_PIPE_EVENT_COMPUTE_PIPELINE_COMPLETE = 0x00000006, // Set event when all pending compute operations are complete
+ VK_PIPE_EVENT_TRANSFER_COMPLETE = 0x00000007, // Set event when all pending transfer operations are complete
+ VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE = 0x00000008, // Set event when all pending GPU work is complete
+
+ VK_PIPE_EVENT_BEGIN_RANGE = VK_PIPE_EVENT_TOP_OF_PIPE,
+ VK_PIPE_EVENT_END_RANGE = VK_PIPE_EVENT_GPU_COMMANDS_COMPLETE,
+ VK_NUM_PIPE_EVENT = (VK_PIPE_EVENT_END_RANGE - VK_PIPE_EVENT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PIPE_EVENT)
+} VK_PIPE_EVENT;
+
+typedef enum _VK_WAIT_EVENT
+{
+ VK_WAIT_EVENT_TOP_OF_PIPE = 0x00000001, // Wait event before the GPU starts processing subsequent commands
+ VK_WAIT_EVENT_BEFORE_RASTERIZATION = 0x00000002, // Wait event before rasterizing subsequent primitives
+
+ VK_WAIT_EVENT_BEGIN_RANGE = VK_WAIT_EVENT_TOP_OF_PIPE,
+ VK_WAIT_EVENT_END_RANGE = VK_WAIT_EVENT_BEFORE_RASTERIZATION,
+ VK_NUM_WAIT_EVENT = (VK_WAIT_EVENT_END_RANGE - VK_WAIT_EVENT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_WAIT_EVENT)
+} VK_WAIT_EVENT;
+
+typedef enum _VK_MEMORY_OUTPUT_FLAGS
+{
+ VK_MEMORY_OUTPUT_CPU_WRITE_BIT = 0x00000001, // Controls output coherency of CPU writes
+ VK_MEMORY_OUTPUT_SHADER_WRITE_BIT = 0x00000002, // Controls output coherency of generic shader writes
+ VK_MEMORY_OUTPUT_COLOR_ATTACHMENT_BIT = 0x00000004, // Controls output coherency of color attachment writes
+ VK_MEMORY_OUTPUT_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000008, // Controls output coherency of depth/stencil attachment writes
+ VK_MEMORY_OUTPUT_COPY_BIT = 0x00000010, // Controls output coherency of copy operations
+ VK_MAX_ENUM(_VK_MEMORY_OUTPUT_FLAGS)
+} VK_MEMORY_OUTPUT_FLAGS;
+
+typedef enum _VK_MEMORY_INPUT_FLAGS
+{
+ VK_MEMORY_INPUT_CPU_READ_BIT = 0x00000001, // Controls input coherency of CPU reads
+ VK_MEMORY_INPUT_INDIRECT_COMMAND_BIT = 0x00000002, // Controls input coherency of indirect command reads
+ VK_MEMORY_INPUT_INDEX_FETCH_BIT = 0x00000004, // Controls input coherency of index fetches
+ VK_MEMORY_INPUT_VERTEX_ATTRIBUTE_FETCH_BIT = 0x00000008, // Controls input coherency of vertex attribute fetches
+ VK_MEMORY_INPUT_UNIFORM_READ_BIT = 0x00000010, // Controls input coherency of uniform buffer reads
+ VK_MEMORY_INPUT_SHADER_READ_BIT = 0x00000020, // Controls input coherency of generic shader reads
+ VK_MEMORY_INPUT_COLOR_ATTACHMENT_BIT = 0x00000040, // Controls input coherency of color attachment reads
+ VK_MEMORY_INPUT_DEPTH_STENCIL_ATTACHMENT_BIT = 0x00000080, // Controls input coherency of depth/stencil attachment reads
+ VK_MEMORY_INPUT_COPY_BIT = 0x00000100, // Controls input coherency of copy operations
+ VK_MAX_ENUM(_VK_MEMORY_INPUT_FLAGS)
+} VK_MEMORY_INPUT_FLAGS;
+
+typedef enum _VK_ATTACHMENT_LOAD_OP
+{
+ VK_ATTACHMENT_LOAD_OP_LOAD = 0x00000000,
+ VK_ATTACHMENT_LOAD_OP_CLEAR = 0x00000001,
+ VK_ATTACHMENT_LOAD_OP_DONT_CARE = 0x00000002,
+
+ VK_ATTACHMENT_LOAD_OP_BEGIN_RANGE = VK_ATTACHMENT_LOAD_OP_LOAD,
+ VK_ATTACHMENT_LOAD_OP_END_RANGE = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
+ VK_NUM_ATTACHMENT_LOAD_OP = (VK_ATTACHMENT_LOAD_OP_END_RANGE - VK_ATTACHMENT_LOAD_OP_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_ATTACHMENT_LOAD_OP)
+} VK_ATTACHMENT_LOAD_OP;
+
+typedef enum _VK_ATTACHMENT_STORE_OP
+{
+ VK_ATTACHMENT_STORE_OP_STORE = 0x00000000,
+ VK_ATTACHMENT_STORE_OP_RESOLVE_MSAA = 0x00000001,
+ VK_ATTACHMENT_STORE_OP_DONT_CARE = 0x00000002,
+
+ VK_ATTACHMENT_STORE_OP_BEGIN_RANGE = VK_ATTACHMENT_STORE_OP_STORE,
+ VK_ATTACHMENT_STORE_OP_END_RANGE = VK_ATTACHMENT_STORE_OP_DONT_CARE,
+ VK_NUM_ATTACHMENT_STORE_OP = (VK_ATTACHMENT_STORE_OP_END_RANGE - VK_ATTACHMENT_STORE_OP_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_ATTACHMENT_STORE_OP)
+} VK_ATTACHMENT_STORE_OP;
+
+typedef enum _VK_IMAGE_TYPE
+{
+ VK_IMAGE_1D = 0x00000000,
+ VK_IMAGE_2D = 0x00000001,
+ VK_IMAGE_3D = 0x00000002,
+
+ VK_IMAGE_TYPE_BEGIN_RANGE = VK_IMAGE_1D,
+ VK_IMAGE_TYPE_END_RANGE = VK_IMAGE_3D,
+ VK_NUM_IMAGE_TYPE = (VK_IMAGE_TYPE_END_RANGE - VK_IMAGE_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_IMAGE_TYPE)
+} VK_IMAGE_TYPE;
+
+typedef enum _VK_IMAGE_TILING
+{
+ VK_LINEAR_TILING = 0x00000000,
+ VK_OPTIMAL_TILING = 0x00000001,
+
+ VK_IMAGE_TILING_BEGIN_RANGE = VK_LINEAR_TILING,
+ VK_IMAGE_TILING_END_RANGE = VK_OPTIMAL_TILING,
+ VK_NUM_IMAGE_TILING = (VK_IMAGE_TILING_END_RANGE - VK_IMAGE_TILING_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_IMAGE_TILING)
+} VK_IMAGE_TILING;
+
+typedef enum _VK_IMAGE_VIEW_TYPE
+{
+ VK_IMAGE_VIEW_1D = 0x00000000,
+ VK_IMAGE_VIEW_2D = 0x00000001,
+ VK_IMAGE_VIEW_3D = 0x00000002,
+ VK_IMAGE_VIEW_CUBE = 0x00000003,
+
+ VK_IMAGE_VIEW_TYPE_BEGIN_RANGE = VK_IMAGE_VIEW_1D,
+ VK_IMAGE_VIEW_TYPE_END_RANGE = VK_IMAGE_VIEW_CUBE,
+ VK_NUM_IMAGE_VIEW_TYPE = (VK_IMAGE_VIEW_TYPE_END_RANGE - VK_IMAGE_VIEW_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_IMAGE_VIEW_TYPE)
+} VK_IMAGE_VIEW_TYPE;
+
+typedef enum _VK_IMAGE_ASPECT
+{
+ VK_IMAGE_ASPECT_COLOR = 0x00000000,
+ VK_IMAGE_ASPECT_DEPTH = 0x00000001,
+ VK_IMAGE_ASPECT_STENCIL = 0x00000002,
+
+ VK_IMAGE_ASPECT_BEGIN_RANGE = VK_IMAGE_ASPECT_COLOR,
+ VK_IMAGE_ASPECT_END_RANGE = VK_IMAGE_ASPECT_STENCIL,
+ VK_NUM_IMAGE_ASPECT = (VK_IMAGE_ASPECT_END_RANGE - VK_IMAGE_ASPECT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_IMAGE_ASPECT)
+} VK_IMAGE_ASPECT;
+
+typedef enum _VK_CHANNEL_SWIZZLE
+{
+ VK_CHANNEL_SWIZZLE_ZERO = 0x00000000,
+ VK_CHANNEL_SWIZZLE_ONE = 0x00000001,
+ VK_CHANNEL_SWIZZLE_R = 0x00000002,
+ VK_CHANNEL_SWIZZLE_G = 0x00000003,
+ VK_CHANNEL_SWIZZLE_B = 0x00000004,
+ VK_CHANNEL_SWIZZLE_A = 0x00000005,
+
+ VK_CHANNEL_SWIZZLE_BEGIN_RANGE = VK_CHANNEL_SWIZZLE_ZERO,
+ VK_CHANNEL_SWIZZLE_END_RANGE = VK_CHANNEL_SWIZZLE_A,
+ VK_NUM_CHANNEL_SWIZZLE = (VK_CHANNEL_SWIZZLE_END_RANGE - VK_CHANNEL_SWIZZLE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_CHANNEL_SWIZZLE)
+} VK_CHANNEL_SWIZZLE;
+
+typedef enum _VK_DESCRIPTOR_TYPE
+{
+ VK_DESCRIPTOR_TYPE_SAMPLER = 0x00000000,
+ VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE = 0x00000001,
+ VK_DESCRIPTOR_TYPE_TEXTURE = 0x00000002,
+ VK_DESCRIPTOR_TYPE_TEXTURE_BUFFER = 0x00000003,
+ VK_DESCRIPTOR_TYPE_IMAGE = 0x00000004,
+ VK_DESCRIPTOR_TYPE_IMAGE_BUFFER = 0x00000005,
+ VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER = 0x00000006,
+ VK_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER = 0x00000007,
+ VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC = 0x00000008,
+ VK_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER_DYNAMIC = 0x00000009,
- XGL_DESCRIPTOR_TYPE_BEGIN_RANGE = XGL_DESCRIPTOR_TYPE_SAMPLER,
- XGL_DESCRIPTOR_TYPE_END_RANGE = XGL_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER_DYNAMIC,
- XGL_NUM_DESCRIPTOR_TYPE = (XGL_DESCRIPTOR_TYPE_END_RANGE - XGL_DESCRIPTOR_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_DESCRIPTOR_TYPE)
-} XGL_DESCRIPTOR_TYPE;
-
-typedef enum _XGL_DESCRIPTOR_POOL_USAGE
-{
- XGL_DESCRIPTOR_POOL_USAGE_ONE_SHOT = 0x00000000,
- XGL_DESCRIPTOR_POOL_USAGE_DYNAMIC = 0x00000001,
-
- XGL_DESCRIPTOR_POOL_USAGE_BEGIN_RANGE = XGL_DESCRIPTOR_POOL_USAGE_ONE_SHOT,
- XGL_DESCRIPTOR_POOL_USAGE_END_RANGE = XGL_DESCRIPTOR_POOL_USAGE_DYNAMIC,
- XGL_NUM_DESCRIPTOR_POOL_USAGE = (XGL_DESCRIPTOR_POOL_USAGE_END_RANGE - XGL_DESCRIPTOR_POOL_USAGE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_DESCRIPTOR_POOL_USAGE)
-} XGL_DESCRIPTOR_POOL_USAGE;
-
-typedef enum _XGL_DESCRIPTOR_UPDATE_MODE
-{
- XGL_DESCRIPTOR_UDPATE_MODE_COPY = 0x00000000,
- XGL_DESCRIPTOR_UPDATE_MODE_FASTEST = 0x00000001,
-
- XGL_DESCRIPTOR_UPDATE_MODE_BEGIN_RANGE = XGL_DESCRIPTOR_UDPATE_MODE_COPY,
- XGL_DESCRIPTOR_UPDATE_MODE_END_RANGE = XGL_DESCRIPTOR_UPDATE_MODE_FASTEST,
- XGL_NUM_DESCRIPTOR_UPDATE_MODE = (XGL_DESCRIPTOR_UPDATE_MODE_END_RANGE - XGL_DESCRIPTOR_UPDATE_MODE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_DESCRIPTOR_UPDATE_MODE)
-} XGL_DESCRIPTOR_UPDATE_MODE;
-
-typedef enum _XGL_DESCRIPTOR_SET_USAGE
-{
- XGL_DESCRIPTOR_SET_USAGE_ONE_SHOT = 0x00000000,
- XGL_DESCRIPTOR_SET_USAGE_STATIC = 0x00000001,
-
- XGL_DESCRIPTOR_SET_USAGE_BEGIN_RANGE = XGL_DESCRIPTOR_SET_USAGE_ONE_SHOT,
- XGL_DESCRIPTOR_SET_USAGE_END_RANGE = XGL_DESCRIPTOR_SET_USAGE_STATIC,
- XGL_NUM_DESCRIPTOR_SET_USAGE = (XGL_DESCRIPTOR_SET_USAGE_END_RANGE - XGL_DESCRIPTOR_SET_USAGE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_DESCRIPTOR_SET_USAGE)
-} XGL_DESCRIPTOR_SET_USAGE;
-
-typedef enum _XGL_QUERY_TYPE
-{
- XGL_QUERY_OCCLUSION = 0x00000000,
- XGL_QUERY_PIPELINE_STATISTICS = 0x00000001,
-
- XGL_QUERY_TYPE_BEGIN_RANGE = XGL_QUERY_OCCLUSION,
- XGL_QUERY_TYPE_END_RANGE = XGL_QUERY_PIPELINE_STATISTICS,
- XGL_NUM_QUERY_TYPE = (XGL_QUERY_TYPE_END_RANGE - XGL_QUERY_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_QUERY_TYPE)
-} XGL_QUERY_TYPE;
-
-typedef enum _XGL_TIMESTAMP_TYPE
-{
- XGL_TIMESTAMP_TOP = 0x00000000,
- XGL_TIMESTAMP_BOTTOM = 0x00000001,
-
- XGL_TIMESTAMP_TYPE_BEGIN_RANGE = XGL_TIMESTAMP_TOP,
- XGL_TIMESTAMP_TYPE_END_RANGE = XGL_TIMESTAMP_BOTTOM,
- XGL_NUM_TIMESTAMP_TYPE = (XGL_TIMESTAMP_TYPE_END_RANGE - XGL_TIMESTAMP_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_TIMESTEAMP_TYPE)
-} XGL_TIMESTAMP_TYPE;
-
-typedef enum _XGL_BORDER_COLOR_TYPE
-{
- XGL_BORDER_COLOR_OPAQUE_WHITE = 0x00000000,
- XGL_BORDER_COLOR_TRANSPARENT_BLACK = 0x00000001,
- XGL_BORDER_COLOR_OPAQUE_BLACK = 0x00000002,
-
- XGL_BORDER_COLOR_TYPE_BEGIN_RANGE = XGL_BORDER_COLOR_OPAQUE_WHITE,
- XGL_BORDER_COLOR_TYPE_END_RANGE = XGL_BORDER_COLOR_OPAQUE_BLACK,
- XGL_NUM_BORDER_COLOR_TYPE = (XGL_BORDER_COLOR_TYPE_END_RANGE - XGL_BORDER_COLOR_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_BORDER_COLOR_TYPE)
-} XGL_BORDER_COLOR_TYPE;
-
-typedef enum _XGL_PIPELINE_BIND_POINT
-{
- XGL_PIPELINE_BIND_POINT_COMPUTE = 0x00000000,
- XGL_PIPELINE_BIND_POINT_GRAPHICS = 0x00000001,
-
- XGL_PIPELINE_BIND_POINT_BEGIN_RANGE = XGL_PIPELINE_BIND_POINT_COMPUTE,
- XGL_PIPELINE_BIND_POINT_END_RANGE = XGL_PIPELINE_BIND_POINT_GRAPHICS,
- XGL_NUM_PIPELINE_BIND_POINT = (XGL_PIPELINE_BIND_POINT_END_RANGE - XGL_PIPELINE_BIND_POINT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PIPELINE_BIND_POINT)
-} XGL_PIPELINE_BIND_POINT;
-
-typedef enum _XGL_STATE_BIND_POINT
-{
- XGL_STATE_BIND_VIEWPORT = 0x00000000,
- XGL_STATE_BIND_RASTER = 0x00000001,
- XGL_STATE_BIND_COLOR_BLEND = 0x00000002,
- XGL_STATE_BIND_DEPTH_STENCIL = 0x00000003,
-
- XGL_STATE_BIND_POINT_BEGIN_RANGE = XGL_STATE_BIND_VIEWPORT,
- XGL_STATE_BIND_POINT_END_RANGE = XGL_STATE_BIND_DEPTH_STENCIL,
- XGL_NUM_STATE_BIND_POINT = (XGL_STATE_BIND_POINT_END_RANGE - XGL_STATE_BIND_POINT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_STATE_BIND_POINT)
-} XGL_STATE_BIND_POINT;
-
-typedef enum _XGL_PRIMITIVE_TOPOLOGY
-{
- XGL_TOPOLOGY_POINT_LIST = 0x00000000,
- XGL_TOPOLOGY_LINE_LIST = 0x00000001,
- XGL_TOPOLOGY_LINE_STRIP = 0x00000002,
- XGL_TOPOLOGY_TRIANGLE_LIST = 0x00000003,
- XGL_TOPOLOGY_TRIANGLE_STRIP = 0x00000004,
- XGL_TOPOLOGY_TRIANGLE_FAN = 0x00000005,
- XGL_TOPOLOGY_LINE_LIST_ADJ = 0x00000006,
- XGL_TOPOLOGY_LINE_STRIP_ADJ = 0x00000007,
- XGL_TOPOLOGY_TRIANGLE_LIST_ADJ = 0x00000008,
- XGL_TOPOLOGY_TRIANGLE_STRIP_ADJ = 0x00000009,
- XGL_TOPOLOGY_PATCH = 0x0000000a,
-
- XGL_PRIMITIVE_TOPOLOGY_BEGIN_RANGE = XGL_TOPOLOGY_POINT_LIST,
- XGL_PRIMITIVE_TOPOLOGY_END_RANGE = XGL_TOPOLOGY_PATCH,
- XGL_NUM_PRIMITIVE_TOPOLOGY = (XGL_PRIMITIVE_TOPOLOGY_END_RANGE - XGL_PRIMITIVE_TOPOLOGY_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PRIMITIVE_TOPOLOGY)
-} XGL_PRIMITIVE_TOPOLOGY;
-
-typedef enum _XGL_INDEX_TYPE
-{
- XGL_INDEX_8 = 0x00000000,
- XGL_INDEX_16 = 0x00000001,
- XGL_INDEX_32 = 0x00000002,
-
- XGL_INDEX_TYPE_BEGIN_RANGE = XGL_INDEX_8,
- XGL_INDEX_TYPE_END_RANGE = XGL_INDEX_32,
- XGL_NUM_INDEX_TYPE = (XGL_INDEX_TYPE_END_RANGE - XGL_INDEX_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_INDEX_TYPE)
-} XGL_INDEX_TYPE;
-
-typedef enum _XGL_TEX_FILTER
-{
- XGL_TEX_FILTER_NEAREST = 0,
- XGL_TEX_FILTER_LINEAR = 1,
-
- XGL_TEX_FILTER_BEGIN_RANGE = XGL_TEX_FILTER_NEAREST,
- XGL_TEX_FILTER_END_RANGE = XGL_TEX_FILTER_LINEAR,
- XGL_NUM_TEX_FILTER = (XGL_TEX_FILTER_END_RANGE - XGL_TEX_FILTER_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_TEX_FILTER)
-} XGL_TEX_FILTER;
-
-typedef enum _XGL_TEX_MIPMAP_MODE
-{
- XGL_TEX_MIPMAP_BASE = 0, // Always choose base level
- XGL_TEX_MIPMAP_NEAREST = 1, // Choose nearest mip level
- XGL_TEX_MIPMAP_LINEAR = 2, // Linear filter between mip levels
-
- XGL_TEX_MIPMAP_BEGIN_RANGE = XGL_TEX_MIPMAP_BASE,
- XGL_TEX_MIPMAP_END_RANGE = XGL_TEX_MIPMAP_LINEAR,
- XGL_NUM_TEX_MIPMAP = (XGL_TEX_MIPMAP_END_RANGE - XGL_TEX_MIPMAP_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_TEX_MIPMAP_MODE)
-} XGL_TEX_MIPMAP_MODE;
-
-typedef enum _XGL_TEX_ADDRESS
-{
- XGL_TEX_ADDRESS_WRAP = 0x00000000,
- XGL_TEX_ADDRESS_MIRROR = 0x00000001,
- XGL_TEX_ADDRESS_CLAMP = 0x00000002,
- XGL_TEX_ADDRESS_MIRROR_ONCE = 0x00000003,
- XGL_TEX_ADDRESS_CLAMP_BORDER = 0x00000004,
-
- XGL_TEX_ADDRESS_BEGIN_RANGE = XGL_TEX_ADDRESS_WRAP,
- XGL_TEX_ADDRESS_END_RANGE = XGL_TEX_ADDRESS_CLAMP_BORDER,
- XGL_NUM_TEX_ADDRESS = (XGL_TEX_ADDRESS_END_RANGE - XGL_TEX_ADDRESS_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_TEX_ADDRESS)
-} XGL_TEX_ADDRESS;
-
-typedef enum _XGL_COMPARE_FUNC
-{
- XGL_COMPARE_NEVER = 0x00000000,
- XGL_COMPARE_LESS = 0x00000001,
- XGL_COMPARE_EQUAL = 0x00000002,
- XGL_COMPARE_LESS_EQUAL = 0x00000003,
- XGL_COMPARE_GREATER = 0x00000004,
- XGL_COMPARE_NOT_EQUAL = 0x00000005,
- XGL_COMPARE_GREATER_EQUAL = 0x00000006,
- XGL_COMPARE_ALWAYS = 0x00000007,
-
- XGL_COMPARE_FUNC_BEGIN_RANGE = XGL_COMPARE_NEVER,
- XGL_COMPARE_FUNC_END_RANGE = XGL_COMPARE_ALWAYS,
- XGL_NUM_COMPARE_FUNC = (XGL_COMPARE_FUNC_END_RANGE - XGL_COMPARE_FUNC_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_COMPARE_FUNC)
-} XGL_COMPARE_FUNC;
-
-typedef enum _XGL_FILL_MODE
-{
- XGL_FILL_POINTS = 0x00000000,
- XGL_FILL_WIREFRAME = 0x00000001,
- XGL_FILL_SOLID = 0x00000002,
-
- XGL_FILL_MODE_BEGIN_RANGE = XGL_FILL_POINTS,
- XGL_FILL_MODE_END_RANGE = XGL_FILL_SOLID,
- XGL_NUM_FILL_MODE = (XGL_FILL_MODE_END_RANGE - XGL_FILL_MODE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_FILL_MODE)
-} XGL_FILL_MODE;
-
-typedef enum _XGL_CULL_MODE
-{
- XGL_CULL_NONE = 0x00000000,
- XGL_CULL_FRONT = 0x00000001,
- XGL_CULL_BACK = 0x00000002,
- XGL_CULL_FRONT_AND_BACK = 0x00000003,
-
- XGL_CULL_MODE_BEGIN_RANGE = XGL_CULL_NONE,
- XGL_CULL_MODE_END_RANGE = XGL_CULL_FRONT_AND_BACK,
- XGL_NUM_CULL_MODE = (XGL_CULL_MODE_END_RANGE - XGL_CULL_MODE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_CULL_MODE)
-} XGL_CULL_MODE;
-
-typedef enum _XGL_FACE_ORIENTATION
-{
- XGL_FRONT_FACE_CCW = 0x00000000,
- XGL_FRONT_FACE_CW = 0x00000001,
-
- XGL_FACE_ORIENTATION_BEGIN_RANGE = XGL_FRONT_FACE_CCW,
- XGL_FACE_ORIENTATION_END_RANGE = XGL_FRONT_FACE_CW,
- XGL_NUM_FACE_ORIENTATION = (XGL_FACE_ORIENTATION_END_RANGE - XGL_FACE_ORIENTATION_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_FACE_ORIENTATION)
-} XGL_FACE_ORIENTATION;
-
-typedef enum _XGL_PROVOKING_VERTEX_CONVENTION
-{
- XGL_PROVOKING_VERTEX_FIRST = 0x00000000,
- XGL_PROVOKING_VERTEX_LAST = 0x00000001,
-
- XGL_PROVOKING_VERTEX_BEGIN_RANGE = XGL_PROVOKING_VERTEX_FIRST,
- XGL_PROVOKING_VERTEX_END_RANGE = XGL_PROVOKING_VERTEX_LAST,
- XGL_NUM_PROVOKING_VERTEX_CONVENTION = (XGL_PROVOKING_VERTEX_END_RANGE - XGL_PROVOKING_VERTEX_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PROVOKING_VERTEX_CONVENTION)
-} XGL_PROVOKING_VERTEX_CONVENTION;
-
-typedef enum _XGL_COORDINATE_ORIGIN
-{
- XGL_COORDINATE_ORIGIN_UPPER_LEFT = 0x00000000,
- XGL_COORDINATE_ORIGIN_LOWER_LEFT = 0x00000001,
-
- XGL_COORDINATE_ORIGIN_BEGIN_RANGE = XGL_COORDINATE_ORIGIN_UPPER_LEFT,
- XGL_COORDINATE_ORIGIN_END_RANGE = XGL_COORDINATE_ORIGIN_LOWER_LEFT,
- XGL_NUM_COORDINATE_ORIGIN = (XGL_COORDINATE_ORIGIN_END_RANGE - XGL_COORDINATE_ORIGIN_END_RANGE + 1),
- XGL_MAX_ENUM(_XGL_COORDINATE_ORIGIN)
-} XGL_COORDINATE_ORIGIN;
-
-typedef enum _XGL_DEPTH_MODE
-{
- XGL_DEPTH_MODE_ZERO_TO_ONE = 0x00000000,
- XGL_DEPTH_MODE_NEGATIVE_ONE_TO_ONE = 0x00000001,
-
- XGL_DEPTH_MODE_BEGIN_RANGE = XGL_DEPTH_MODE_ZERO_TO_ONE,
- XGL_DEPTH_MODE_END_RANGE = XGL_DEPTH_MODE_NEGATIVE_ONE_TO_ONE,
- XGL_NUM_DEPTH_MODE = (XGL_DEPTH_MODE_END_RANGE - XGL_DEPTH_MODE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_DEPTH_MODE)
-} XGL_DEPTH_MODE;
-
-typedef enum _XGL_BLEND
-{
- XGL_BLEND_ZERO = 0x00000000,
- XGL_BLEND_ONE = 0x00000001,
- XGL_BLEND_SRC_COLOR = 0x00000002,
- XGL_BLEND_ONE_MINUS_SRC_COLOR = 0x00000003,
- XGL_BLEND_DEST_COLOR = 0x00000004,
- XGL_BLEND_ONE_MINUS_DEST_COLOR = 0x00000005,
- XGL_BLEND_SRC_ALPHA = 0x00000006,
- XGL_BLEND_ONE_MINUS_SRC_ALPHA = 0x00000007,
- XGL_BLEND_DEST_ALPHA = 0x00000008,
- XGL_BLEND_ONE_MINUS_DEST_ALPHA = 0x00000009,
- XGL_BLEND_CONSTANT_COLOR = 0x0000000a,
- XGL_BLEND_ONE_MINUS_CONSTANT_COLOR = 0x0000000b,
- XGL_BLEND_CONSTANT_ALPHA = 0x0000000c,
- XGL_BLEND_ONE_MINUS_CONSTANT_ALPHA = 0x0000000d,
- XGL_BLEND_SRC_ALPHA_SATURATE = 0x0000000e,
- XGL_BLEND_SRC1_COLOR = 0x0000000f,
- XGL_BLEND_ONE_MINUS_SRC1_COLOR = 0x00000010,
- XGL_BLEND_SRC1_ALPHA = 0x00000011,
- XGL_BLEND_ONE_MINUS_SRC1_ALPHA = 0x00000012,
-
- XGL_BLEND_BEGIN_RANGE = XGL_BLEND_ZERO,
- XGL_BLEND_END_RANGE = XGL_BLEND_ONE_MINUS_SRC1_ALPHA,
- XGL_NUM_BLEND = (XGL_BLEND_END_RANGE - XGL_BLEND_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_BLEND)
-} XGL_BLEND;
-
-typedef enum _XGL_BLEND_FUNC
-{
- XGL_BLEND_FUNC_ADD = 0x00000000,
- XGL_BLEND_FUNC_SUBTRACT = 0x00000001,
- XGL_BLEND_FUNC_REVERSE_SUBTRACT = 0x00000002,
- XGL_BLEND_FUNC_MIN = 0x00000003,
- XGL_BLEND_FUNC_MAX = 0x00000004,
-
- XGL_BLEND_FUNC_BEGIN_RANGE = XGL_BLEND_FUNC_ADD,
- XGL_BLEND_FUNC_END_RANGE = XGL_BLEND_FUNC_MAX,
- XGL_NUM_BLEND_FUNC = (XGL_BLEND_FUNC_END_RANGE - XGL_BLEND_FUNC_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_BLEND_FUNC)
-} XGL_BLEND_FUNC;
-
-typedef enum _XGL_STENCIL_OP
-{
- XGL_STENCIL_OP_KEEP = 0x00000000,
- XGL_STENCIL_OP_ZERO = 0x00000001,
- XGL_STENCIL_OP_REPLACE = 0x00000002,
- XGL_STENCIL_OP_INC_CLAMP = 0x00000003,
- XGL_STENCIL_OP_DEC_CLAMP = 0x00000004,
- XGL_STENCIL_OP_INVERT = 0x00000005,
- XGL_STENCIL_OP_INC_WRAP = 0x00000006,
- XGL_STENCIL_OP_DEC_WRAP = 0x00000007,
-
- XGL_STENCIL_OP_BEGIN_RANGE = XGL_STENCIL_OP_KEEP,
- XGL_STENCIL_OP_END_RANGE = XGL_STENCIL_OP_DEC_WRAP,
- XGL_NUM_STENCIL_OP = (XGL_STENCIL_OP_END_RANGE - XGL_STENCIL_OP_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_STENCIL_OP)
-} XGL_STENCIL_OP;
-
-typedef enum _XGL_LOGIC_OP
-{
- XGL_LOGIC_OP_COPY = 0x00000000,
- XGL_LOGIC_OP_CLEAR = 0x00000001,
- XGL_LOGIC_OP_AND = 0x00000002,
- XGL_LOGIC_OP_AND_REVERSE = 0x00000003,
- XGL_LOGIC_OP_AND_INVERTED = 0x00000004,
- XGL_LOGIC_OP_NOOP = 0x00000005,
- XGL_LOGIC_OP_XOR = 0x00000006,
- XGL_LOGIC_OP_OR = 0x00000007,
- XGL_LOGIC_OP_NOR = 0x00000008,
- XGL_LOGIC_OP_EQUIV = 0x00000009,
- XGL_LOGIC_OP_INVERT = 0x0000000a,
- XGL_LOGIC_OP_OR_REVERSE = 0x0000000b,
- XGL_LOGIC_OP_COPY_INVERTED = 0x0000000c,
- XGL_LOGIC_OP_OR_INVERTED = 0x0000000d,
- XGL_LOGIC_OP_NAND = 0x0000000e,
- XGL_LOGIC_OP_SET = 0x0000000f,
-
- XGL_LOGIC_OP_BEGIN_RANGE = XGL_LOGIC_OP_COPY,
- XGL_LOGIC_OP_END_RANGE = XGL_LOGIC_OP_SET,
- XGL_NUM_LOGIC_OP = (XGL_LOGIC_OP_END_RANGE - XGL_LOGIC_OP_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_LOGIC_OP)
-} XGL_LOGIC_OP;
-
-typedef enum _XGL_SYSTEM_ALLOC_TYPE
-{
- XGL_SYSTEM_ALLOC_API_OBJECT = 0x00000000,
- XGL_SYSTEM_ALLOC_INTERNAL = 0x00000001,
- XGL_SYSTEM_ALLOC_INTERNAL_TEMP = 0x00000002,
- XGL_SYSTEM_ALLOC_INTERNAL_SHADER = 0x00000003,
- XGL_SYSTEM_ALLOC_DEBUG = 0x00000004,
-
- XGL_SYSTEM_ALLOC_BEGIN_RANGE = XGL_SYSTEM_ALLOC_API_OBJECT,
- XGL_SYSTEM_ALLOC_END_RANGE = XGL_SYSTEM_ALLOC_DEBUG,
- XGL_NUM_SYSTEM_ALLOC_TYPE = (XGL_SYSTEM_ALLOC_END_RANGE - XGL_SYSTEM_ALLOC_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_SYSTEM_ALLOC_TYPE)
-} XGL_SYSTEM_ALLOC_TYPE;
-
-typedef enum _XGL_PHYSICAL_GPU_TYPE
-{
- XGL_GPU_TYPE_OTHER = 0x00000000,
- XGL_GPU_TYPE_INTEGRATED = 0x00000001,
- XGL_GPU_TYPE_DISCRETE = 0x00000002,
- XGL_GPU_TYPE_VIRTUAL = 0x00000003,
-
- XGL_PHYSICAL_GPU_TYPE_BEGIN_RANGE = XGL_GPU_TYPE_OTHER,
- XGL_PHYSICAL_GPU_TYPE_END_RANGE = XGL_GPU_TYPE_VIRTUAL,
- XGL_NUM_PHYSICAL_GPU_TYPE = (XGL_PHYSICAL_GPU_TYPE_END_RANGE - XGL_PHYSICAL_GPU_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PHYSICAL_GPU_TYPE)
-} XGL_PHYSICAL_GPU_TYPE;
-
-typedef enum _XGL_PHYSICAL_GPU_INFO_TYPE
-{
- // Info type for xglGetGpuInfo()
- XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES = 0x00000000,
- XGL_INFO_TYPE_PHYSICAL_GPU_PERFORMANCE = 0x00000001,
- XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES = 0x00000002,
- XGL_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES = 0x00000003,
-
- XGL_INFO_TYPE_PHYSICAL_GPU_BEGIN_RANGE = XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
- XGL_INFO_TYPE_PHYSICAL_GPU_END_RANGE = XGL_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES,
- XGL_NUM_INFO_TYPE_PHYSICAL_GPU = (XGL_INFO_TYPE_PHYSICAL_GPU_END_RANGE - XGL_INFO_TYPE_PHYSICAL_GPU_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PHYSICAL_GPU_INFO_TYPE)
-} XGL_PHYSICAL_GPU_INFO_TYPE;
-
-typedef enum _XGL_FORMAT_INFO_TYPE
-{
- // Info type for xglGetFormatInfo()
- XGL_INFO_TYPE_FORMAT_PROPERTIES = 0x00000000,
-
- XGL_INFO_TYPE_FORMAT_BEGIN_RANGE = XGL_INFO_TYPE_FORMAT_PROPERTIES,
- XGL_INFO_TYPE_FORMAT_END_RANGE = XGL_INFO_TYPE_FORMAT_PROPERTIES,
- XGL_NUM_INFO_TYPE_FORMAT = (XGL_INFO_TYPE_FORMAT_END_RANGE - XGL_INFO_TYPE_FORMAT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_FORMAT_INFO_TYPE)
-} XGL_FORMAT_INFO_TYPE;
-
-typedef enum _XGL_SUBRESOURCE_INFO_TYPE
-{
- // Info type for xglGetImageSubresourceInfo()
- XGL_INFO_TYPE_SUBRESOURCE_LAYOUT = 0x00000000,
-
- XGL_INFO_TYPE_SUBRESOURCE_BEGIN_RANGE = XGL_INFO_TYPE_SUBRESOURCE_LAYOUT,
- XGL_INFO_TYPE_SUBRESOURCE_END_RANGE = XGL_INFO_TYPE_SUBRESOURCE_LAYOUT,
- XGL_NUM_INFO_TYPE_SUBRESOURCE = (XGL_INFO_TYPE_SUBRESOURCE_END_RANGE - XGL_INFO_TYPE_SUBRESOURCE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_SUBRESOURCE_INFO_TYPE)
-} XGL_SUBRESOURCE_INFO_TYPE;
-
-typedef enum _XGL_OBJECT_INFO_TYPE
-{
- // Info type for xglGetObjectInfo()
- XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT = 0x00000000,
- XGL_INFO_TYPE_MEMORY_REQUIREMENTS = 0x00000001,
- XGL_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS = 0x00000002,
- XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS = 0x00000003,
-
- XGL_INFO_TYPE_BEGIN_RANGE = XGL_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
- XGL_INFO_TYPE_END_RANGE = XGL_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
- XGL_NUM_INFO_TYPE = (XGL_INFO_TYPE_END_RANGE - XGL_INFO_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_OBJECT_INFO_TYPE)
-} XGL_OBJECT_INFO_TYPE;
-
-typedef enum _XGL_VALIDATION_LEVEL
-{
- XGL_VALIDATION_LEVEL_0 = 0x00000000,
- XGL_VALIDATION_LEVEL_1 = 0x00000001,
- XGL_VALIDATION_LEVEL_2 = 0x00000002,
- XGL_VALIDATION_LEVEL_3 = 0x00000003,
- XGL_VALIDATION_LEVEL_4 = 0x00000004,
-
- XGL_VALIDATION_LEVEL_BEGIN_RANGE = XGL_VALIDATION_LEVEL_0,
- XGL_VALIDATION_LEVEL_END_RANGE = XGL_VALIDATION_LEVEL_4,
- XGL_NUM_VALIDATION_LEVEL = (XGL_VALIDATION_LEVEL_END_RANGE - XGL_VALIDATION_LEVEL_BEGIN_RANGE + 1),
-
- XGL_MAX_ENUM(_XGL_VALIDATION_LEVEL)
-} XGL_VALIDATION_LEVEL;
+ VK_DESCRIPTOR_TYPE_BEGIN_RANGE = VK_DESCRIPTOR_TYPE_SAMPLER,
+ VK_DESCRIPTOR_TYPE_END_RANGE = VK_DESCRIPTOR_TYPE_SHADER_STORAGE_BUFFER_DYNAMIC,
+ VK_NUM_DESCRIPTOR_TYPE = (VK_DESCRIPTOR_TYPE_END_RANGE - VK_DESCRIPTOR_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_DESCRIPTOR_TYPE)
+} VK_DESCRIPTOR_TYPE;
+
+typedef enum _VK_DESCRIPTOR_POOL_USAGE
+{
+ VK_DESCRIPTOR_POOL_USAGE_ONE_SHOT = 0x00000000,
+ VK_DESCRIPTOR_POOL_USAGE_DYNAMIC = 0x00000001,
+
+ VK_DESCRIPTOR_POOL_USAGE_BEGIN_RANGE = VK_DESCRIPTOR_POOL_USAGE_ONE_SHOT,
+ VK_DESCRIPTOR_POOL_USAGE_END_RANGE = VK_DESCRIPTOR_POOL_USAGE_DYNAMIC,
+ VK_NUM_DESCRIPTOR_POOL_USAGE = (VK_DESCRIPTOR_POOL_USAGE_END_RANGE - VK_DESCRIPTOR_POOL_USAGE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_DESCRIPTOR_POOL_USAGE)
+} VK_DESCRIPTOR_POOL_USAGE;
+
+typedef enum _VK_DESCRIPTOR_UPDATE_MODE
+{
+ VK_DESCRIPTOR_UPDATE_MODE_COPY = 0x00000000,
+ VK_DESCRIPTOR_UPDATE_MODE_FASTEST = 0x00000001,
+
+ VK_DESCRIPTOR_UPDATE_MODE_BEGIN_RANGE = VK_DESCRIPTOR_UPDATE_MODE_COPY,
+ VK_DESCRIPTOR_UPDATE_MODE_END_RANGE = VK_DESCRIPTOR_UPDATE_MODE_FASTEST,
+ VK_NUM_DESCRIPTOR_UPDATE_MODE = (VK_DESCRIPTOR_UPDATE_MODE_END_RANGE - VK_DESCRIPTOR_UPDATE_MODE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_DESCRIPTOR_UPDATE_MODE)
+} VK_DESCRIPTOR_UPDATE_MODE;
+
+typedef enum _VK_DESCRIPTOR_SET_USAGE
+{
+ VK_DESCRIPTOR_SET_USAGE_ONE_SHOT = 0x00000000,
+ VK_DESCRIPTOR_SET_USAGE_STATIC = 0x00000001,
+
+ VK_DESCRIPTOR_SET_USAGE_BEGIN_RANGE = VK_DESCRIPTOR_SET_USAGE_ONE_SHOT,
+ VK_DESCRIPTOR_SET_USAGE_END_RANGE = VK_DESCRIPTOR_SET_USAGE_STATIC,
+ VK_NUM_DESCRIPTOR_SET_USAGE = (VK_DESCRIPTOR_SET_USAGE_END_RANGE - VK_DESCRIPTOR_SET_USAGE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_DESCRIPTOR_SET_USAGE)
+} VK_DESCRIPTOR_SET_USAGE;
+
+typedef enum _VK_QUERY_TYPE
+{
+ VK_QUERY_OCCLUSION = 0x00000000,
+ VK_QUERY_PIPELINE_STATISTICS = 0x00000001,
+
+ VK_QUERY_TYPE_BEGIN_RANGE = VK_QUERY_OCCLUSION,
+ VK_QUERY_TYPE_END_RANGE = VK_QUERY_PIPELINE_STATISTICS,
+ VK_NUM_QUERY_TYPE = (VK_QUERY_TYPE_END_RANGE - VK_QUERY_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_QUERY_TYPE)
+} VK_QUERY_TYPE;
+
+typedef enum _VK_TIMESTAMP_TYPE
+{
+ VK_TIMESTAMP_TOP = 0x00000000,
+ VK_TIMESTAMP_BOTTOM = 0x00000001,
+
+ VK_TIMESTAMP_TYPE_BEGIN_RANGE = VK_TIMESTAMP_TOP,
+ VK_TIMESTAMP_TYPE_END_RANGE = VK_TIMESTAMP_BOTTOM,
+ VK_NUM_TIMESTAMP_TYPE = (VK_TIMESTAMP_TYPE_END_RANGE - VK_TIMESTAMP_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_TIMESTAMP_TYPE)
+} VK_TIMESTAMP_TYPE;
+
+typedef enum _VK_BORDER_COLOR_TYPE
+{
+ VK_BORDER_COLOR_OPAQUE_WHITE = 0x00000000,
+ VK_BORDER_COLOR_TRANSPARENT_BLACK = 0x00000001,
+ VK_BORDER_COLOR_OPAQUE_BLACK = 0x00000002,
+
+ VK_BORDER_COLOR_TYPE_BEGIN_RANGE = VK_BORDER_COLOR_OPAQUE_WHITE,
+ VK_BORDER_COLOR_TYPE_END_RANGE = VK_BORDER_COLOR_OPAQUE_BLACK,
+ VK_NUM_BORDER_COLOR_TYPE = (VK_BORDER_COLOR_TYPE_END_RANGE - VK_BORDER_COLOR_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_BORDER_COLOR_TYPE)
+} VK_BORDER_COLOR_TYPE;
+
+typedef enum _VK_PIPELINE_BIND_POINT
+{
+ VK_PIPELINE_BIND_POINT_COMPUTE = 0x00000000,
+ VK_PIPELINE_BIND_POINT_GRAPHICS = 0x00000001,
+
+ VK_PIPELINE_BIND_POINT_BEGIN_RANGE = VK_PIPELINE_BIND_POINT_COMPUTE,
+ VK_PIPELINE_BIND_POINT_END_RANGE = VK_PIPELINE_BIND_POINT_GRAPHICS,
+ VK_NUM_PIPELINE_BIND_POINT = (VK_PIPELINE_BIND_POINT_END_RANGE - VK_PIPELINE_BIND_POINT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PIPELINE_BIND_POINT)
+} VK_PIPELINE_BIND_POINT;
+
+typedef enum _VK_STATE_BIND_POINT
+{
+ VK_STATE_BIND_VIEWPORT = 0x00000000,
+ VK_STATE_BIND_RASTER = 0x00000001,
+ VK_STATE_BIND_COLOR_BLEND = 0x00000002,
+ VK_STATE_BIND_DEPTH_STENCIL = 0x00000003,
+
+ VK_STATE_BIND_POINT_BEGIN_RANGE = VK_STATE_BIND_VIEWPORT,
+ VK_STATE_BIND_POINT_END_RANGE = VK_STATE_BIND_DEPTH_STENCIL,
+ VK_NUM_STATE_BIND_POINT = (VK_STATE_BIND_POINT_END_RANGE - VK_STATE_BIND_POINT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_STATE_BIND_POINT)
+} VK_STATE_BIND_POINT;
+
+typedef enum _VK_PRIMITIVE_TOPOLOGY
+{
+ VK_TOPOLOGY_POINT_LIST = 0x00000000,
+ VK_TOPOLOGY_LINE_LIST = 0x00000001,
+ VK_TOPOLOGY_LINE_STRIP = 0x00000002,
+ VK_TOPOLOGY_TRIANGLE_LIST = 0x00000003,
+ VK_TOPOLOGY_TRIANGLE_STRIP = 0x00000004,
+ VK_TOPOLOGY_TRIANGLE_FAN = 0x00000005,
+ VK_TOPOLOGY_LINE_LIST_ADJ = 0x00000006,
+ VK_TOPOLOGY_LINE_STRIP_ADJ = 0x00000007,
+ VK_TOPOLOGY_TRIANGLE_LIST_ADJ = 0x00000008,
+ VK_TOPOLOGY_TRIANGLE_STRIP_ADJ = 0x00000009,
+ VK_TOPOLOGY_PATCH = 0x0000000a,
+
+ VK_PRIMITIVE_TOPOLOGY_BEGIN_RANGE = VK_TOPOLOGY_POINT_LIST,
+ VK_PRIMITIVE_TOPOLOGY_END_RANGE = VK_TOPOLOGY_PATCH,
+ VK_NUM_PRIMITIVE_TOPOLOGY = (VK_PRIMITIVE_TOPOLOGY_END_RANGE - VK_PRIMITIVE_TOPOLOGY_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PRIMITIVE_TOPOLOGY)
+} VK_PRIMITIVE_TOPOLOGY;
+
+typedef enum _VK_INDEX_TYPE
+{
+ VK_INDEX_8 = 0x00000000,
+ VK_INDEX_16 = 0x00000001,
+ VK_INDEX_32 = 0x00000002,
+
+ VK_INDEX_TYPE_BEGIN_RANGE = VK_INDEX_8,
+ VK_INDEX_TYPE_END_RANGE = VK_INDEX_32,
+ VK_NUM_INDEX_TYPE = (VK_INDEX_TYPE_END_RANGE - VK_INDEX_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_INDEX_TYPE)
+} VK_INDEX_TYPE;
+
+typedef enum _VK_TEX_FILTER
+{
+ VK_TEX_FILTER_NEAREST = 0,
+ VK_TEX_FILTER_LINEAR = 1,
+
+ VK_TEX_FILTER_BEGIN_RANGE = VK_TEX_FILTER_NEAREST,
+ VK_TEX_FILTER_END_RANGE = VK_TEX_FILTER_LINEAR,
+ VK_NUM_TEX_FILTER = (VK_TEX_FILTER_END_RANGE - VK_TEX_FILTER_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_TEX_FILTER)
+} VK_TEX_FILTER;
+
+typedef enum _VK_TEX_MIPMAP_MODE
+{
+ VK_TEX_MIPMAP_BASE = 0, // Always choose base level
+ VK_TEX_MIPMAP_NEAREST = 1, // Choose nearest mip level
+ VK_TEX_MIPMAP_LINEAR = 2, // Linear filter between mip levels
+
+ VK_TEX_MIPMAP_BEGIN_RANGE = VK_TEX_MIPMAP_BASE,
+ VK_TEX_MIPMAP_END_RANGE = VK_TEX_MIPMAP_LINEAR,
+ VK_NUM_TEX_MIPMAP = (VK_TEX_MIPMAP_END_RANGE - VK_TEX_MIPMAP_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_TEX_MIPMAP_MODE)
+} VK_TEX_MIPMAP_MODE;
+
+typedef enum _VK_TEX_ADDRESS
+{
+ VK_TEX_ADDRESS_WRAP = 0x00000000,
+ VK_TEX_ADDRESS_MIRROR = 0x00000001,
+ VK_TEX_ADDRESS_CLAMP = 0x00000002,
+ VK_TEX_ADDRESS_MIRROR_ONCE = 0x00000003,
+ VK_TEX_ADDRESS_CLAMP_BORDER = 0x00000004,
+
+ VK_TEX_ADDRESS_BEGIN_RANGE = VK_TEX_ADDRESS_WRAP,
+ VK_TEX_ADDRESS_END_RANGE = VK_TEX_ADDRESS_CLAMP_BORDER,
+ VK_NUM_TEX_ADDRESS = (VK_TEX_ADDRESS_END_RANGE - VK_TEX_ADDRESS_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_TEX_ADDRESS)
+} VK_TEX_ADDRESS;
+
+typedef enum _VK_COMPARE_FUNC
+{
+ VK_COMPARE_NEVER = 0x00000000,
+ VK_COMPARE_LESS = 0x00000001,
+ VK_COMPARE_EQUAL = 0x00000002,
+ VK_COMPARE_LESS_EQUAL = 0x00000003,
+ VK_COMPARE_GREATER = 0x00000004,
+ VK_COMPARE_NOT_EQUAL = 0x00000005,
+ VK_COMPARE_GREATER_EQUAL = 0x00000006,
+ VK_COMPARE_ALWAYS = 0x00000007,
+
+ VK_COMPARE_FUNC_BEGIN_RANGE = VK_COMPARE_NEVER,
+ VK_COMPARE_FUNC_END_RANGE = VK_COMPARE_ALWAYS,
+ VK_NUM_COMPARE_FUNC = (VK_COMPARE_FUNC_END_RANGE - VK_COMPARE_FUNC_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_COMPARE_FUNC)
+} VK_COMPARE_FUNC;
+
+typedef enum _VK_FILL_MODE
+{
+ VK_FILL_POINTS = 0x00000000,
+ VK_FILL_WIREFRAME = 0x00000001,
+ VK_FILL_SOLID = 0x00000002,
+
+ VK_FILL_MODE_BEGIN_RANGE = VK_FILL_POINTS,
+ VK_FILL_MODE_END_RANGE = VK_FILL_SOLID,
+ VK_NUM_FILL_MODE = (VK_FILL_MODE_END_RANGE - VK_FILL_MODE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_FILL_MODE)
+} VK_FILL_MODE;
+
+typedef enum _VK_CULL_MODE
+{
+ VK_CULL_NONE = 0x00000000,
+ VK_CULL_FRONT = 0x00000001,
+ VK_CULL_BACK = 0x00000002,
+ VK_CULL_FRONT_AND_BACK = 0x00000003,
+
+ VK_CULL_MODE_BEGIN_RANGE = VK_CULL_NONE,
+ VK_CULL_MODE_END_RANGE = VK_CULL_FRONT_AND_BACK,
+ VK_NUM_CULL_MODE = (VK_CULL_MODE_END_RANGE - VK_CULL_MODE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_CULL_MODE)
+} VK_CULL_MODE;
+
+typedef enum _VK_FACE_ORIENTATION
+{
+ VK_FRONT_FACE_CCW = 0x00000000,
+ VK_FRONT_FACE_CW = 0x00000001,
+
+ VK_FACE_ORIENTATION_BEGIN_RANGE = VK_FRONT_FACE_CCW,
+ VK_FACE_ORIENTATION_END_RANGE = VK_FRONT_FACE_CW,
+ VK_NUM_FACE_ORIENTATION = (VK_FACE_ORIENTATION_END_RANGE - VK_FACE_ORIENTATION_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_FACE_ORIENTATION)
+} VK_FACE_ORIENTATION;
+
+typedef enum _VK_PROVOKING_VERTEX_CONVENTION
+{
+ VK_PROVOKING_VERTEX_FIRST = 0x00000000,
+ VK_PROVOKING_VERTEX_LAST = 0x00000001,
+
+ VK_PROVOKING_VERTEX_BEGIN_RANGE = VK_PROVOKING_VERTEX_FIRST,
+ VK_PROVOKING_VERTEX_END_RANGE = VK_PROVOKING_VERTEX_LAST,
+ VK_NUM_PROVOKING_VERTEX_CONVENTION = (VK_PROVOKING_VERTEX_END_RANGE - VK_PROVOKING_VERTEX_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PROVOKING_VERTEX_CONVENTION)
+} VK_PROVOKING_VERTEX_CONVENTION;
+
+typedef enum _VK_COORDINATE_ORIGIN
+{
+ VK_COORDINATE_ORIGIN_UPPER_LEFT = 0x00000000,
+ VK_COORDINATE_ORIGIN_LOWER_LEFT = 0x00000001,
+
+ VK_COORDINATE_ORIGIN_BEGIN_RANGE = VK_COORDINATE_ORIGIN_UPPER_LEFT,
+ VK_COORDINATE_ORIGIN_END_RANGE = VK_COORDINATE_ORIGIN_LOWER_LEFT,
+ VK_NUM_COORDINATE_ORIGIN = (VK_COORDINATE_ORIGIN_END_RANGE - VK_COORDINATE_ORIGIN_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_COORDINATE_ORIGIN)
+} VK_COORDINATE_ORIGIN;
+
+typedef enum _VK_DEPTH_MODE
+{
+ VK_DEPTH_MODE_ZERO_TO_ONE = 0x00000000,
+ VK_DEPTH_MODE_NEGATIVE_ONE_TO_ONE = 0x00000001,
+
+ VK_DEPTH_MODE_BEGIN_RANGE = VK_DEPTH_MODE_ZERO_TO_ONE,
+ VK_DEPTH_MODE_END_RANGE = VK_DEPTH_MODE_NEGATIVE_ONE_TO_ONE,
+ VK_NUM_DEPTH_MODE = (VK_DEPTH_MODE_END_RANGE - VK_DEPTH_MODE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_DEPTH_MODE)
+} VK_DEPTH_MODE;
+
+typedef enum _VK_BLEND
+{
+ VK_BLEND_ZERO = 0x00000000,
+ VK_BLEND_ONE = 0x00000001,
+ VK_BLEND_SRC_COLOR = 0x00000002,
+ VK_BLEND_ONE_MINUS_SRC_COLOR = 0x00000003,
+ VK_BLEND_DEST_COLOR = 0x00000004,
+ VK_BLEND_ONE_MINUS_DEST_COLOR = 0x00000005,
+ VK_BLEND_SRC_ALPHA = 0x00000006,
+ VK_BLEND_ONE_MINUS_SRC_ALPHA = 0x00000007,
+ VK_BLEND_DEST_ALPHA = 0x00000008,
+ VK_BLEND_ONE_MINUS_DEST_ALPHA = 0x00000009,
+ VK_BLEND_CONSTANT_COLOR = 0x0000000a,
+ VK_BLEND_ONE_MINUS_CONSTANT_COLOR = 0x0000000b,
+ VK_BLEND_CONSTANT_ALPHA = 0x0000000c,
+ VK_BLEND_ONE_MINUS_CONSTANT_ALPHA = 0x0000000d,
+ VK_BLEND_SRC_ALPHA_SATURATE = 0x0000000e,
+ VK_BLEND_SRC1_COLOR = 0x0000000f,
+ VK_BLEND_ONE_MINUS_SRC1_COLOR = 0x00000010,
+ VK_BLEND_SRC1_ALPHA = 0x00000011,
+ VK_BLEND_ONE_MINUS_SRC1_ALPHA = 0x00000012,
+
+ VK_BLEND_BEGIN_RANGE = VK_BLEND_ZERO,
+ VK_BLEND_END_RANGE = VK_BLEND_ONE_MINUS_SRC1_ALPHA,
+ VK_NUM_BLEND = (VK_BLEND_END_RANGE - VK_BLEND_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_BLEND)
+} VK_BLEND;
+
+typedef enum _VK_BLEND_FUNC
+{
+ VK_BLEND_FUNC_ADD = 0x00000000,
+ VK_BLEND_FUNC_SUBTRACT = 0x00000001,
+ VK_BLEND_FUNC_REVERSE_SUBTRACT = 0x00000002,
+ VK_BLEND_FUNC_MIN = 0x00000003,
+ VK_BLEND_FUNC_MAX = 0x00000004,
+
+ VK_BLEND_FUNC_BEGIN_RANGE = VK_BLEND_FUNC_ADD,
+ VK_BLEND_FUNC_END_RANGE = VK_BLEND_FUNC_MAX,
+ VK_NUM_BLEND_FUNC = (VK_BLEND_FUNC_END_RANGE - VK_BLEND_FUNC_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_BLEND_FUNC)
+} VK_BLEND_FUNC;
+
+typedef enum _VK_STENCIL_OP
+{
+ VK_STENCIL_OP_KEEP = 0x00000000,
+ VK_STENCIL_OP_ZERO = 0x00000001,
+ VK_STENCIL_OP_REPLACE = 0x00000002,
+ VK_STENCIL_OP_INC_CLAMP = 0x00000003,
+ VK_STENCIL_OP_DEC_CLAMP = 0x00000004,
+ VK_STENCIL_OP_INVERT = 0x00000005,
+ VK_STENCIL_OP_INC_WRAP = 0x00000006,
+ VK_STENCIL_OP_DEC_WRAP = 0x00000007,
+
+ VK_STENCIL_OP_BEGIN_RANGE = VK_STENCIL_OP_KEEP,
+ VK_STENCIL_OP_END_RANGE = VK_STENCIL_OP_DEC_WRAP,
+ VK_NUM_STENCIL_OP = (VK_STENCIL_OP_END_RANGE - VK_STENCIL_OP_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_STENCIL_OP)
+} VK_STENCIL_OP;
+
+typedef enum _VK_LOGIC_OP
+{
+ VK_LOGIC_OP_COPY = 0x00000000,
+ VK_LOGIC_OP_CLEAR = 0x00000001,
+ VK_LOGIC_OP_AND = 0x00000002,
+ VK_LOGIC_OP_AND_REVERSE = 0x00000003,
+ VK_LOGIC_OP_AND_INVERTED = 0x00000004,
+ VK_LOGIC_OP_NOOP = 0x00000005,
+ VK_LOGIC_OP_XOR = 0x00000006,
+ VK_LOGIC_OP_OR = 0x00000007,
+ VK_LOGIC_OP_NOR = 0x00000008,
+ VK_LOGIC_OP_EQUIV = 0x00000009,
+ VK_LOGIC_OP_INVERT = 0x0000000a,
+ VK_LOGIC_OP_OR_REVERSE = 0x0000000b,
+ VK_LOGIC_OP_COPY_INVERTED = 0x0000000c,
+ VK_LOGIC_OP_OR_INVERTED = 0x0000000d,
+ VK_LOGIC_OP_NAND = 0x0000000e,
+ VK_LOGIC_OP_SET = 0x0000000f,
+
+ VK_LOGIC_OP_BEGIN_RANGE = VK_LOGIC_OP_COPY,
+ VK_LOGIC_OP_END_RANGE = VK_LOGIC_OP_SET,
+ VK_NUM_LOGIC_OP = (VK_LOGIC_OP_END_RANGE - VK_LOGIC_OP_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_LOGIC_OP)
+} VK_LOGIC_OP;
+
+typedef enum _VK_SYSTEM_ALLOC_TYPE
+{
+ VK_SYSTEM_ALLOC_API_OBJECT = 0x00000000,
+ VK_SYSTEM_ALLOC_INTERNAL = 0x00000001,
+ VK_SYSTEM_ALLOC_INTERNAL_TEMP = 0x00000002,
+ VK_SYSTEM_ALLOC_INTERNAL_SHADER = 0x00000003,
+ VK_SYSTEM_ALLOC_DEBUG = 0x00000004,
+
+ VK_SYSTEM_ALLOC_BEGIN_RANGE = VK_SYSTEM_ALLOC_API_OBJECT,
+ VK_SYSTEM_ALLOC_END_RANGE = VK_SYSTEM_ALLOC_DEBUG,
+ VK_NUM_SYSTEM_ALLOC_TYPE = (VK_SYSTEM_ALLOC_END_RANGE - VK_SYSTEM_ALLOC_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_SYSTEM_ALLOC_TYPE)
+} VK_SYSTEM_ALLOC_TYPE;
+
+typedef enum _VK_PHYSICAL_GPU_TYPE
+{
+ VK_GPU_TYPE_OTHER = 0x00000000,
+ VK_GPU_TYPE_INTEGRATED = 0x00000001,
+ VK_GPU_TYPE_DISCRETE = 0x00000002,
+ VK_GPU_TYPE_VIRTUAL = 0x00000003,
+
+ VK_PHYSICAL_GPU_TYPE_BEGIN_RANGE = VK_GPU_TYPE_OTHER,
+ VK_PHYSICAL_GPU_TYPE_END_RANGE = VK_GPU_TYPE_VIRTUAL,
+ VK_NUM_PHYSICAL_GPU_TYPE = (VK_PHYSICAL_GPU_TYPE_END_RANGE - VK_PHYSICAL_GPU_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PHYSICAL_GPU_TYPE)
+} VK_PHYSICAL_GPU_TYPE;
+
+typedef enum _VK_PHYSICAL_GPU_INFO_TYPE
+{
+ // Info type for vkGetGpuInfo()
+ VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES = 0x00000000,
+ VK_INFO_TYPE_PHYSICAL_GPU_PERFORMANCE = 0x00000001,
+ VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES = 0x00000002,
+ VK_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES = 0x00000003,
+
+ VK_INFO_TYPE_PHYSICAL_GPU_BEGIN_RANGE = VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES,
+ VK_INFO_TYPE_PHYSICAL_GPU_END_RANGE = VK_INFO_TYPE_PHYSICAL_GPU_MEMORY_PROPERTIES,
+ VK_NUM_INFO_TYPE_PHYSICAL_GPU = (VK_INFO_TYPE_PHYSICAL_GPU_END_RANGE - VK_INFO_TYPE_PHYSICAL_GPU_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PHYSICAL_GPU_INFO_TYPE)
+} VK_PHYSICAL_GPU_INFO_TYPE;
+
+typedef enum _VK_FORMAT_INFO_TYPE
+{
+ // Info type for vkGetFormatInfo()
+ VK_INFO_TYPE_FORMAT_PROPERTIES = 0x00000000,
+
+ VK_INFO_TYPE_FORMAT_BEGIN_RANGE = VK_INFO_TYPE_FORMAT_PROPERTIES,
+ VK_INFO_TYPE_FORMAT_END_RANGE = VK_INFO_TYPE_FORMAT_PROPERTIES,
+ VK_NUM_INFO_TYPE_FORMAT = (VK_INFO_TYPE_FORMAT_END_RANGE - VK_INFO_TYPE_FORMAT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_FORMAT_INFO_TYPE)
+} VK_FORMAT_INFO_TYPE;
+
+typedef enum _VK_SUBRESOURCE_INFO_TYPE
+{
+ // Info type for vkGetImageSubresourceInfo()
+ VK_INFO_TYPE_SUBRESOURCE_LAYOUT = 0x00000000,
+
+ VK_INFO_TYPE_SUBRESOURCE_BEGIN_RANGE = VK_INFO_TYPE_SUBRESOURCE_LAYOUT,
+ VK_INFO_TYPE_SUBRESOURCE_END_RANGE = VK_INFO_TYPE_SUBRESOURCE_LAYOUT,
+ VK_NUM_INFO_TYPE_SUBRESOURCE = (VK_INFO_TYPE_SUBRESOURCE_END_RANGE - VK_INFO_TYPE_SUBRESOURCE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_SUBRESOURCE_INFO_TYPE)
+} VK_SUBRESOURCE_INFO_TYPE;
+
+typedef enum _VK_OBJECT_INFO_TYPE
+{
+ // Info type for vkGetObjectInfo()
+ VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT = 0x00000000,
+ VK_INFO_TYPE_MEMORY_REQUIREMENTS = 0x00000001,
+ VK_INFO_TYPE_BUFFER_MEMORY_REQUIREMENTS = 0x00000002,
+ VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS = 0x00000003,
+
+ VK_INFO_TYPE_BEGIN_RANGE = VK_INFO_TYPE_MEMORY_ALLOCATION_COUNT,
+ VK_INFO_TYPE_END_RANGE = VK_INFO_TYPE_IMAGE_MEMORY_REQUIREMENTS,
+ VK_NUM_INFO_TYPE = (VK_INFO_TYPE_END_RANGE - VK_INFO_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_OBJECT_INFO_TYPE)
+} VK_OBJECT_INFO_TYPE;
+
+typedef enum _VK_VALIDATION_LEVEL
+{
+ VK_VALIDATION_LEVEL_0 = 0x00000000,
+ VK_VALIDATION_LEVEL_1 = 0x00000001,
+ VK_VALIDATION_LEVEL_2 = 0x00000002,
+ VK_VALIDATION_LEVEL_3 = 0x00000003,
+ VK_VALIDATION_LEVEL_4 = 0x00000004,
+
+ VK_VALIDATION_LEVEL_BEGIN_RANGE = VK_VALIDATION_LEVEL_0,
+ VK_VALIDATION_LEVEL_END_RANGE = VK_VALIDATION_LEVEL_4,
+ VK_NUM_VALIDATION_LEVEL = (VK_VALIDATION_LEVEL_END_RANGE - VK_VALIDATION_LEVEL_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_VALIDATION_LEVEL)
+} VK_VALIDATION_LEVEL;
// ------------------------------------------------------------------------------------------------
// Error and return codes
-typedef enum _XGL_RESULT
+typedef enum _VK_RESULT
{
// Return codes for successful operation execution (>= 0)
- XGL_SUCCESS = 0x0000000,
- XGL_UNSUPPORTED = 0x0000001,
- XGL_NOT_READY = 0x0000002,
- XGL_TIMEOUT = 0x0000003,
- XGL_EVENT_SET = 0x0000004,
- XGL_EVENT_RESET = 0x0000005,
+ VK_SUCCESS = 0x0000000,
+ VK_UNSUPPORTED = 0x0000001,
+ VK_NOT_READY = 0x0000002,
+ VK_TIMEOUT = 0x0000003,
+ VK_EVENT_SET = 0x0000004,
+ VK_EVENT_RESET = 0x0000005,
// Error codes (negative values)
- XGL_ERROR_UNKNOWN = -(0x00000001),
- XGL_ERROR_UNAVAILABLE = -(0x00000002),
- XGL_ERROR_INITIALIZATION_FAILED = -(0x00000003),
- XGL_ERROR_OUT_OF_MEMORY = -(0x00000004),
- XGL_ERROR_OUT_OF_GPU_MEMORY = -(0x00000005),
- XGL_ERROR_DEVICE_ALREADY_CREATED = -(0x00000006),
- XGL_ERROR_DEVICE_LOST = -(0x00000007),
- XGL_ERROR_INVALID_POINTER = -(0x00000008),
- XGL_ERROR_INVALID_VALUE = -(0x00000009),
- XGL_ERROR_INVALID_HANDLE = -(0x0000000A),
- XGL_ERROR_INVALID_ORDINAL = -(0x0000000B),
- XGL_ERROR_INVALID_MEMORY_SIZE = -(0x0000000C),
- XGL_ERROR_INVALID_EXTENSION = -(0x0000000D),
- XGL_ERROR_INVALID_FLAGS = -(0x0000000E),
- XGL_ERROR_INVALID_ALIGNMENT = -(0x0000000F),
- XGL_ERROR_INVALID_FORMAT = -(0x00000010),
- XGL_ERROR_INVALID_IMAGE = -(0x00000011),
- XGL_ERROR_INVALID_DESCRIPTOR_SET_DATA = -(0x00000012),
- XGL_ERROR_INVALID_QUEUE_TYPE = -(0x00000013),
- XGL_ERROR_INVALID_OBJECT_TYPE = -(0x00000014),
- XGL_ERROR_UNSUPPORTED_SHADER_IL_VERSION = -(0x00000015),
- XGL_ERROR_BAD_SHADER_CODE = -(0x00000016),
- XGL_ERROR_BAD_PIPELINE_DATA = -(0x00000017),
- XGL_ERROR_TOO_MANY_MEMORY_REFERENCES = -(0x00000018),
- XGL_ERROR_NOT_MAPPABLE = -(0x00000019),
- XGL_ERROR_MEMORY_MAP_FAILED = -(0x0000001A),
- XGL_ERROR_MEMORY_UNMAP_FAILED = -(0x0000001B),
- XGL_ERROR_INCOMPATIBLE_DEVICE = -(0x0000001C),
- XGL_ERROR_INCOMPATIBLE_DRIVER = -(0x0000001D),
- XGL_ERROR_INCOMPLETE_COMMAND_BUFFER = -(0x0000001E),
- XGL_ERROR_BUILDING_COMMAND_BUFFER = -(0x0000001F),
- XGL_ERROR_MEMORY_NOT_BOUND = -(0x00000020),
- XGL_ERROR_INCOMPATIBLE_QUEUE = -(0x00000021),
- XGL_ERROR_NOT_SHAREABLE = -(0x00000022),
- XGL_MAX_ENUM(_XGL_RESULT_CODE)
-} XGL_RESULT;
+ VK_ERROR_UNKNOWN = -(0x00000001),
+ VK_ERROR_UNAVAILABLE = -(0x00000002),
+ VK_ERROR_INITIALIZATION_FAILED = -(0x00000003),
+ VK_ERROR_OUT_OF_MEMORY = -(0x00000004),
+ VK_ERROR_OUT_OF_GPU_MEMORY = -(0x00000005),
+ VK_ERROR_DEVICE_ALREADY_CREATED = -(0x00000006),
+ VK_ERROR_DEVICE_LOST = -(0x00000007),
+ VK_ERROR_INVALID_POINTER = -(0x00000008),
+ VK_ERROR_INVALID_VALUE = -(0x00000009),
+ VK_ERROR_INVALID_HANDLE = -(0x0000000A),
+ VK_ERROR_INVALID_ORDINAL = -(0x0000000B),
+ VK_ERROR_INVALID_MEMORY_SIZE = -(0x0000000C),
+ VK_ERROR_INVALID_EXTENSION = -(0x0000000D),
+ VK_ERROR_INVALID_FLAGS = -(0x0000000E),
+ VK_ERROR_INVALID_ALIGNMENT = -(0x0000000F),
+ VK_ERROR_INVALID_FORMAT = -(0x00000010),
+ VK_ERROR_INVALID_IMAGE = -(0x00000011),
+ VK_ERROR_INVALID_DESCRIPTOR_SET_DATA = -(0x00000012),
+ VK_ERROR_INVALID_QUEUE_TYPE = -(0x00000013),
+ VK_ERROR_INVALID_OBJECT_TYPE = -(0x00000014),
+ VK_ERROR_UNSUPPORTED_SHADER_IL_VERSION = -(0x00000015),
+ VK_ERROR_BAD_SHADER_CODE = -(0x00000016),
+ VK_ERROR_BAD_PIPELINE_DATA = -(0x00000017),
+ VK_ERROR_TOO_MANY_MEMORY_REFERENCES = -(0x00000018),
+ VK_ERROR_NOT_MAPPABLE = -(0x00000019),
+ VK_ERROR_MEMORY_MAP_FAILED = -(0x0000001A),
+ VK_ERROR_MEMORY_UNMAP_FAILED = -(0x0000001B),
+ VK_ERROR_INCOMPATIBLE_DEVICE = -(0x0000001C),
+ VK_ERROR_INCOMPATIBLE_DRIVER = -(0x0000001D),
+ VK_ERROR_INCOMPLETE_COMMAND_BUFFER = -(0x0000001E),
+ VK_ERROR_BUILDING_COMMAND_BUFFER = -(0x0000001F),
+ VK_ERROR_MEMORY_NOT_BOUND = -(0x00000020),
+ VK_ERROR_INCOMPATIBLE_QUEUE = -(0x00000021),
+ VK_ERROR_NOT_SHAREABLE = -(0x00000022),
+ VK_MAX_ENUM(_VK_RESULT_CODE)
+} VK_RESULT;
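+
+// Illustrative usage of the VK_RESULT convention defined above: success and
+// non-error statuses are >= 0, errors are negative. (some_vk_call() is a
+// placeholder, not a real entry point.)
+//
+//     VK_RESULT result = some_vk_call();
+//     if (result < VK_SUCCESS) {
+//         /* handle error (e.g. VK_ERROR_OUT_OF_MEMORY) */
+//     }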
// ------------------------------------------------------------------------------------------------
-// XGL format definitions
-
-typedef enum _XGL_VERTEX_INPUT_STEP_RATE
-{
- XGL_VERTEX_INPUT_STEP_RATE_VERTEX = 0x0,
- XGL_VERTEX_INPUT_STEP_RATE_INSTANCE = 0x1,
- XGL_VERTEX_INPUT_STEP_RATE_DRAW = 0x2, //Optional
-
- XGL_VERTEX_INPUT_STEP_RATE_BEGIN_RANGE = XGL_VERTEX_INPUT_STEP_RATE_VERTEX,
- XGL_VERTEX_INPUT_STEP_RATE_END_RANGE = XGL_VERTEX_INPUT_STEP_RATE_DRAW,
- XGL_NUM_VERTEX_INPUT_STEP_RATE = (XGL_VERTEX_INPUT_STEP_RATE_END_RANGE - XGL_VERTEX_INPUT_STEP_RATE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_VERTEX_INPUT_STEP_RATE)
-} XGL_VERTEX_INPUT_STEP_RATE;
-
-typedef enum _XGL_FORMAT
-{
- XGL_FMT_UNDEFINED = 0x00000000,
- XGL_FMT_R4G4_UNORM = 0x00000001,
- XGL_FMT_R4G4_USCALED = 0x00000002,
- XGL_FMT_R4G4B4A4_UNORM = 0x00000003,
- XGL_FMT_R4G4B4A4_USCALED = 0x00000004,
- XGL_FMT_R5G6B5_UNORM = 0x00000005,
- XGL_FMT_R5G6B5_USCALED = 0x00000006,
- XGL_FMT_R5G5B5A1_UNORM = 0x00000007,
- XGL_FMT_R5G5B5A1_USCALED = 0x00000008,
- XGL_FMT_R8_UNORM = 0x00000009,
- XGL_FMT_R8_SNORM = 0x0000000A,
- XGL_FMT_R8_USCALED = 0x0000000B,
- XGL_FMT_R8_SSCALED = 0x0000000C,
- XGL_FMT_R8_UINT = 0x0000000D,
- XGL_FMT_R8_SINT = 0x0000000E,
- XGL_FMT_R8_SRGB = 0x0000000F,
- XGL_FMT_R8G8_UNORM = 0x00000010,
- XGL_FMT_R8G8_SNORM = 0x00000011,
- XGL_FMT_R8G8_USCALED = 0x00000012,
- XGL_FMT_R8G8_SSCALED = 0x00000013,
- XGL_FMT_R8G8_UINT = 0x00000014,
- XGL_FMT_R8G8_SINT = 0x00000015,
- XGL_FMT_R8G8_SRGB = 0x00000016,
- XGL_FMT_R8G8B8_UNORM = 0x00000017,
- XGL_FMT_R8G8B8_SNORM = 0x00000018,
- XGL_FMT_R8G8B8_USCALED = 0x00000019,
- XGL_FMT_R8G8B8_SSCALED = 0x0000001A,
- XGL_FMT_R8G8B8_UINT = 0x0000001B,
- XGL_FMT_R8G8B8_SINT = 0x0000001C,
- XGL_FMT_R8G8B8_SRGB = 0x0000001D,
- XGL_FMT_R8G8B8A8_UNORM = 0x0000001E,
- XGL_FMT_R8G8B8A8_SNORM = 0x0000001F,
- XGL_FMT_R8G8B8A8_USCALED = 0x00000020,
- XGL_FMT_R8G8B8A8_SSCALED = 0x00000021,
- XGL_FMT_R8G8B8A8_UINT = 0x00000022,
- XGL_FMT_R8G8B8A8_SINT = 0x00000023,
- XGL_FMT_R8G8B8A8_SRGB = 0x00000024,
- XGL_FMT_R10G10B10A2_UNORM = 0x00000025,
- XGL_FMT_R10G10B10A2_SNORM = 0x00000026,
- XGL_FMT_R10G10B10A2_USCALED = 0x00000027,
- XGL_FMT_R10G10B10A2_SSCALED = 0x00000028,
- XGL_FMT_R10G10B10A2_UINT = 0x00000029,
- XGL_FMT_R10G10B10A2_SINT = 0x0000002A,
- XGL_FMT_R16_UNORM = 0x0000002B,
- XGL_FMT_R16_SNORM = 0x0000002C,
- XGL_FMT_R16_USCALED = 0x0000002D,
- XGL_FMT_R16_SSCALED = 0x0000002E,
- XGL_FMT_R16_UINT = 0x0000002F,
- XGL_FMT_R16_SINT = 0x00000030,
- XGL_FMT_R16_SFLOAT = 0x00000031,
- XGL_FMT_R16G16_UNORM = 0x00000032,
- XGL_FMT_R16G16_SNORM = 0x00000033,
- XGL_FMT_R16G16_USCALED = 0x00000034,
- XGL_FMT_R16G16_SSCALED = 0x00000035,
- XGL_FMT_R16G16_UINT = 0x00000036,
- XGL_FMT_R16G16_SINT = 0x00000037,
- XGL_FMT_R16G16_SFLOAT = 0x00000038,
- XGL_FMT_R16G16B16_UNORM = 0x00000039,
- XGL_FMT_R16G16B16_SNORM = 0x0000003A,
- XGL_FMT_R16G16B16_USCALED = 0x0000003B,
- XGL_FMT_R16G16B16_SSCALED = 0x0000003C,
- XGL_FMT_R16G16B16_UINT = 0x0000003D,
- XGL_FMT_R16G16B16_SINT = 0x0000003E,
- XGL_FMT_R16G16B16_SFLOAT = 0x0000003F,
- XGL_FMT_R16G16B16A16_UNORM = 0x00000040,
- XGL_FMT_R16G16B16A16_SNORM = 0x00000041,
- XGL_FMT_R16G16B16A16_USCALED = 0x00000042,
- XGL_FMT_R16G16B16A16_SSCALED = 0x00000043,
- XGL_FMT_R16G16B16A16_UINT = 0x00000044,
- XGL_FMT_R16G16B16A16_SINT = 0x00000045,
- XGL_FMT_R16G16B16A16_SFLOAT = 0x00000046,
- XGL_FMT_R32_UINT = 0x00000047,
- XGL_FMT_R32_SINT = 0x00000048,
- XGL_FMT_R32_SFLOAT = 0x00000049,
- XGL_FMT_R32G32_UINT = 0x0000004A,
- XGL_FMT_R32G32_SINT = 0x0000004B,
- XGL_FMT_R32G32_SFLOAT = 0x0000004C,
- XGL_FMT_R32G32B32_UINT = 0x0000004D,
- XGL_FMT_R32G32B32_SINT = 0x0000004E,
- XGL_FMT_R32G32B32_SFLOAT = 0x0000004F,
- XGL_FMT_R32G32B32A32_UINT = 0x00000050,
- XGL_FMT_R32G32B32A32_SINT = 0x00000051,
- XGL_FMT_R32G32B32A32_SFLOAT = 0x00000052,
- XGL_FMT_R64_SFLOAT = 0x00000053,
- XGL_FMT_R64G64_SFLOAT = 0x00000054,
- XGL_FMT_R64G64B64_SFLOAT = 0x00000055,
- XGL_FMT_R64G64B64A64_SFLOAT = 0x00000056,
- XGL_FMT_R11G11B10_UFLOAT = 0x00000057,
- XGL_FMT_R9G9B9E5_UFLOAT = 0x00000058,
- XGL_FMT_D16_UNORM = 0x00000059,
- XGL_FMT_D24_UNORM = 0x0000005A,
- XGL_FMT_D32_SFLOAT = 0x0000005B,
- XGL_FMT_S8_UINT = 0x0000005C,
- XGL_FMT_D16_UNORM_S8_UINT = 0x0000005D,
- XGL_FMT_D24_UNORM_S8_UINT = 0x0000005E,
- XGL_FMT_D32_SFLOAT_S8_UINT = 0x0000005F,
- XGL_FMT_BC1_RGB_UNORM = 0x00000060,
- XGL_FMT_BC1_RGB_SRGB = 0x00000061,
- XGL_FMT_BC1_RGBA_UNORM = 0x00000062,
- XGL_FMT_BC1_RGBA_SRGB = 0x00000063,
- XGL_FMT_BC2_UNORM = 0x00000064,
- XGL_FMT_BC2_SRGB = 0x00000065,
- XGL_FMT_BC3_UNORM = 0x00000066,
- XGL_FMT_BC3_SRGB = 0x00000067,
- XGL_FMT_BC4_UNORM = 0x00000068,
- XGL_FMT_BC4_SNORM = 0x00000069,
- XGL_FMT_BC5_UNORM = 0x0000006A,
- XGL_FMT_BC5_SNORM = 0x0000006B,
- XGL_FMT_BC6H_UFLOAT = 0x0000006C,
- XGL_FMT_BC6H_SFLOAT = 0x0000006D,
- XGL_FMT_BC7_UNORM = 0x0000006E,
- XGL_FMT_BC7_SRGB = 0x0000006F,
- XGL_FMT_ETC2_R8G8B8_UNORM = 0x00000070,
- XGL_FMT_ETC2_R8G8B8_SRGB = 0x00000071,
- XGL_FMT_ETC2_R8G8B8A1_UNORM = 0x00000072,
- XGL_FMT_ETC2_R8G8B8A1_SRGB = 0x00000073,
- XGL_FMT_ETC2_R8G8B8A8_UNORM = 0x00000074,
- XGL_FMT_ETC2_R8G8B8A8_SRGB = 0x00000075,
- XGL_FMT_EAC_R11_UNORM = 0x00000076,
- XGL_FMT_EAC_R11_SNORM = 0x00000077,
- XGL_FMT_EAC_R11G11_UNORM = 0x00000078,
- XGL_FMT_EAC_R11G11_SNORM = 0x00000079,
- XGL_FMT_ASTC_4x4_UNORM = 0x0000007A,
- XGL_FMT_ASTC_4x4_SRGB = 0x0000007B,
- XGL_FMT_ASTC_5x4_UNORM = 0x0000007C,
- XGL_FMT_ASTC_5x4_SRGB = 0x0000007D,
- XGL_FMT_ASTC_5x5_UNORM = 0x0000007E,
- XGL_FMT_ASTC_5x5_SRGB = 0x0000007F,
- XGL_FMT_ASTC_6x5_UNORM = 0x00000080,
- XGL_FMT_ASTC_6x5_SRGB = 0x00000081,
- XGL_FMT_ASTC_6x6_UNORM = 0x00000082,
- XGL_FMT_ASTC_6x6_SRGB = 0x00000083,
- XGL_FMT_ASTC_8x5_UNORM = 0x00000084,
- XGL_FMT_ASTC_8x5_SRGB = 0x00000085,
- XGL_FMT_ASTC_8x6_UNORM = 0x00000086,
- XGL_FMT_ASTC_8x6_SRGB = 0x00000087,
- XGL_FMT_ASTC_8x8_UNORM = 0x00000088,
- XGL_FMT_ASTC_8x8_SRGB = 0x00000089,
- XGL_FMT_ASTC_10x5_UNORM = 0x0000008A,
- XGL_FMT_ASTC_10x5_SRGB = 0x0000008B,
- XGL_FMT_ASTC_10x6_UNORM = 0x0000008C,
- XGL_FMT_ASTC_10x6_SRGB = 0x0000008D,
- XGL_FMT_ASTC_10x8_UNORM = 0x0000008E,
- XGL_FMT_ASTC_10x8_SRGB = 0x0000008F,
- XGL_FMT_ASTC_10x10_UNORM = 0x00000090,
- XGL_FMT_ASTC_10x10_SRGB = 0x00000091,
- XGL_FMT_ASTC_12x10_UNORM = 0x00000092,
- XGL_FMT_ASTC_12x10_SRGB = 0x00000093,
- XGL_FMT_ASTC_12x12_UNORM = 0x00000094,
- XGL_FMT_ASTC_12x12_SRGB = 0x00000095,
- XGL_FMT_B4G4R4A4_UNORM = 0x00000096,
- XGL_FMT_B5G5R5A1_UNORM = 0x00000097,
- XGL_FMT_B5G6R5_UNORM = 0x00000098,
- XGL_FMT_B5G6R5_USCALED = 0x00000099,
- XGL_FMT_B8G8R8_UNORM = 0x0000009A,
- XGL_FMT_B8G8R8_SNORM = 0x0000009B,
- XGL_FMT_B8G8R8_USCALED = 0x0000009C,
- XGL_FMT_B8G8R8_SSCALED = 0x0000009D,
- XGL_FMT_B8G8R8_UINT = 0x0000009E,
- XGL_FMT_B8G8R8_SINT = 0x0000009F,
- XGL_FMT_B8G8R8_SRGB = 0x000000A0,
- XGL_FMT_B8G8R8A8_UNORM = 0x000000A1,
- XGL_FMT_B8G8R8A8_SNORM = 0x000000A2,
- XGL_FMT_B8G8R8A8_USCALED = 0x000000A3,
- XGL_FMT_B8G8R8A8_SSCALED = 0x000000A4,
- XGL_FMT_B8G8R8A8_UINT = 0x000000A5,
- XGL_FMT_B8G8R8A8_SINT = 0x000000A6,
- XGL_FMT_B8G8R8A8_SRGB = 0x000000A7,
- XGL_FMT_B10G10R10A2_UNORM = 0x000000A8,
- XGL_FMT_B10G10R10A2_SNORM = 0x000000A9,
- XGL_FMT_B10G10R10A2_USCALED = 0x000000AA,
- XGL_FMT_B10G10R10A2_SSCALED = 0x000000AB,
- XGL_FMT_B10G10R10A2_UINT = 0x000000AC,
- XGL_FMT_B10G10R10A2_SINT = 0x000000AD,
-
- XGL_FMT_BEGIN_RANGE = XGL_FMT_UNDEFINED,
- XGL_FMT_END_RANGE = XGL_FMT_B10G10R10A2_SINT,
- XGL_NUM_FMT = (XGL_FMT_END_RANGE - XGL_FMT_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_FORMAT)
-} XGL_FORMAT;
+// VK format definitions
+
+typedef enum _VK_VERTEX_INPUT_STEP_RATE
+{
+ VK_VERTEX_INPUT_STEP_RATE_VERTEX = 0x0,
+ VK_VERTEX_INPUT_STEP_RATE_INSTANCE = 0x1,
+ VK_VERTEX_INPUT_STEP_RATE_DRAW = 0x2, // Optional
+
+ VK_VERTEX_INPUT_STEP_RATE_BEGIN_RANGE = VK_VERTEX_INPUT_STEP_RATE_VERTEX,
+ VK_VERTEX_INPUT_STEP_RATE_END_RANGE = VK_VERTEX_INPUT_STEP_RATE_DRAW,
+ VK_NUM_VERTEX_INPUT_STEP_RATE = (VK_VERTEX_INPUT_STEP_RATE_END_RANGE - VK_VERTEX_INPUT_STEP_RATE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_VERTEX_INPUT_STEP_RATE)
+} VK_VERTEX_INPUT_STEP_RATE;
+
+typedef enum _VK_FORMAT
+{
+ VK_FMT_UNDEFINED = 0x00000000,
+ VK_FMT_R4G4_UNORM = 0x00000001,
+ VK_FMT_R4G4_USCALED = 0x00000002,
+ VK_FMT_R4G4B4A4_UNORM = 0x00000003,
+ VK_FMT_R4G4B4A4_USCALED = 0x00000004,
+ VK_FMT_R5G6B5_UNORM = 0x00000005,
+ VK_FMT_R5G6B5_USCALED = 0x00000006,
+ VK_FMT_R5G5B5A1_UNORM = 0x00000007,
+ VK_FMT_R5G5B5A1_USCALED = 0x00000008,
+ VK_FMT_R8_UNORM = 0x00000009,
+ VK_FMT_R8_SNORM = 0x0000000A,
+ VK_FMT_R8_USCALED = 0x0000000B,
+ VK_FMT_R8_SSCALED = 0x0000000C,
+ VK_FMT_R8_UINT = 0x0000000D,
+ VK_FMT_R8_SINT = 0x0000000E,
+ VK_FMT_R8_SRGB = 0x0000000F,
+ VK_FMT_R8G8_UNORM = 0x00000010,
+ VK_FMT_R8G8_SNORM = 0x00000011,
+ VK_FMT_R8G8_USCALED = 0x00000012,
+ VK_FMT_R8G8_SSCALED = 0x00000013,
+ VK_FMT_R8G8_UINT = 0x00000014,
+ VK_FMT_R8G8_SINT = 0x00000015,
+ VK_FMT_R8G8_SRGB = 0x00000016,
+ VK_FMT_R8G8B8_UNORM = 0x00000017,
+ VK_FMT_R8G8B8_SNORM = 0x00000018,
+ VK_FMT_R8G8B8_USCALED = 0x00000019,
+ VK_FMT_R8G8B8_SSCALED = 0x0000001A,
+ VK_FMT_R8G8B8_UINT = 0x0000001B,
+ VK_FMT_R8G8B8_SINT = 0x0000001C,
+ VK_FMT_R8G8B8_SRGB = 0x0000001D,
+ VK_FMT_R8G8B8A8_UNORM = 0x0000001E,
+ VK_FMT_R8G8B8A8_SNORM = 0x0000001F,
+ VK_FMT_R8G8B8A8_USCALED = 0x00000020,
+ VK_FMT_R8G8B8A8_SSCALED = 0x00000021,
+ VK_FMT_R8G8B8A8_UINT = 0x00000022,
+ VK_FMT_R8G8B8A8_SINT = 0x00000023,
+ VK_FMT_R8G8B8A8_SRGB = 0x00000024,
+ VK_FMT_R10G10B10A2_UNORM = 0x00000025,
+ VK_FMT_R10G10B10A2_SNORM = 0x00000026,
+ VK_FMT_R10G10B10A2_USCALED = 0x00000027,
+ VK_FMT_R10G10B10A2_SSCALED = 0x00000028,
+ VK_FMT_R10G10B10A2_UINT = 0x00000029,
+ VK_FMT_R10G10B10A2_SINT = 0x0000002A,
+ VK_FMT_R16_UNORM = 0x0000002B,
+ VK_FMT_R16_SNORM = 0x0000002C,
+ VK_FMT_R16_USCALED = 0x0000002D,
+ VK_FMT_R16_SSCALED = 0x0000002E,
+ VK_FMT_R16_UINT = 0x0000002F,
+ VK_FMT_R16_SINT = 0x00000030,
+ VK_FMT_R16_SFLOAT = 0x00000031,
+ VK_FMT_R16G16_UNORM = 0x00000032,
+ VK_FMT_R16G16_SNORM = 0x00000033,
+ VK_FMT_R16G16_USCALED = 0x00000034,
+ VK_FMT_R16G16_SSCALED = 0x00000035,
+ VK_FMT_R16G16_UINT = 0x00000036,
+ VK_FMT_R16G16_SINT = 0x00000037,
+ VK_FMT_R16G16_SFLOAT = 0x00000038,
+ VK_FMT_R16G16B16_UNORM = 0x00000039,
+ VK_FMT_R16G16B16_SNORM = 0x0000003A,
+ VK_FMT_R16G16B16_USCALED = 0x0000003B,
+ VK_FMT_R16G16B16_SSCALED = 0x0000003C,
+ VK_FMT_R16G16B16_UINT = 0x0000003D,
+ VK_FMT_R16G16B16_SINT = 0x0000003E,
+ VK_FMT_R16G16B16_SFLOAT = 0x0000003F,
+ VK_FMT_R16G16B16A16_UNORM = 0x00000040,
+ VK_FMT_R16G16B16A16_SNORM = 0x00000041,
+ VK_FMT_R16G16B16A16_USCALED = 0x00000042,
+ VK_FMT_R16G16B16A16_SSCALED = 0x00000043,
+ VK_FMT_R16G16B16A16_UINT = 0x00000044,
+ VK_FMT_R16G16B16A16_SINT = 0x00000045,
+ VK_FMT_R16G16B16A16_SFLOAT = 0x00000046,
+ VK_FMT_R32_UINT = 0x00000047,
+ VK_FMT_R32_SINT = 0x00000048,
+ VK_FMT_R32_SFLOAT = 0x00000049,
+ VK_FMT_R32G32_UINT = 0x0000004A,
+ VK_FMT_R32G32_SINT = 0x0000004B,
+ VK_FMT_R32G32_SFLOAT = 0x0000004C,
+ VK_FMT_R32G32B32_UINT = 0x0000004D,
+ VK_FMT_R32G32B32_SINT = 0x0000004E,
+ VK_FMT_R32G32B32_SFLOAT = 0x0000004F,
+ VK_FMT_R32G32B32A32_UINT = 0x00000050,
+ VK_FMT_R32G32B32A32_SINT = 0x00000051,
+ VK_FMT_R32G32B32A32_SFLOAT = 0x00000052,
+ VK_FMT_R64_SFLOAT = 0x00000053,
+ VK_FMT_R64G64_SFLOAT = 0x00000054,
+ VK_FMT_R64G64B64_SFLOAT = 0x00000055,
+ VK_FMT_R64G64B64A64_SFLOAT = 0x00000056,
+ VK_FMT_R11G11B10_UFLOAT = 0x00000057,
+ VK_FMT_R9G9B9E5_UFLOAT = 0x00000058,
+ VK_FMT_D16_UNORM = 0x00000059,
+ VK_FMT_D24_UNORM = 0x0000005A,
+ VK_FMT_D32_SFLOAT = 0x0000005B,
+ VK_FMT_S8_UINT = 0x0000005C,
+ VK_FMT_D16_UNORM_S8_UINT = 0x0000005D,
+ VK_FMT_D24_UNORM_S8_UINT = 0x0000005E,
+ VK_FMT_D32_SFLOAT_S8_UINT = 0x0000005F,
+ VK_FMT_BC1_RGB_UNORM = 0x00000060,
+ VK_FMT_BC1_RGB_SRGB = 0x00000061,
+ VK_FMT_BC1_RGBA_UNORM = 0x00000062,
+ VK_FMT_BC1_RGBA_SRGB = 0x00000063,
+ VK_FMT_BC2_UNORM = 0x00000064,
+ VK_FMT_BC2_SRGB = 0x00000065,
+ VK_FMT_BC3_UNORM = 0x00000066,
+ VK_FMT_BC3_SRGB = 0x00000067,
+ VK_FMT_BC4_UNORM = 0x00000068,
+ VK_FMT_BC4_SNORM = 0x00000069,
+ VK_FMT_BC5_UNORM = 0x0000006A,
+ VK_FMT_BC5_SNORM = 0x0000006B,
+ VK_FMT_BC6H_UFLOAT = 0x0000006C,
+ VK_FMT_BC6H_SFLOAT = 0x0000006D,
+ VK_FMT_BC7_UNORM = 0x0000006E,
+ VK_FMT_BC7_SRGB = 0x0000006F,
+ VK_FMT_ETC2_R8G8B8_UNORM = 0x00000070,
+ VK_FMT_ETC2_R8G8B8_SRGB = 0x00000071,
+ VK_FMT_ETC2_R8G8B8A1_UNORM = 0x00000072,
+ VK_FMT_ETC2_R8G8B8A1_SRGB = 0x00000073,
+ VK_FMT_ETC2_R8G8B8A8_UNORM = 0x00000074,
+ VK_FMT_ETC2_R8G8B8A8_SRGB = 0x00000075,
+ VK_FMT_EAC_R11_UNORM = 0x00000076,
+ VK_FMT_EAC_R11_SNORM = 0x00000077,
+ VK_FMT_EAC_R11G11_UNORM = 0x00000078,
+ VK_FMT_EAC_R11G11_SNORM = 0x00000079,
+ VK_FMT_ASTC_4x4_UNORM = 0x0000007A,
+ VK_FMT_ASTC_4x4_SRGB = 0x0000007B,
+ VK_FMT_ASTC_5x4_UNORM = 0x0000007C,
+ VK_FMT_ASTC_5x4_SRGB = 0x0000007D,
+ VK_FMT_ASTC_5x5_UNORM = 0x0000007E,
+ VK_FMT_ASTC_5x5_SRGB = 0x0000007F,
+ VK_FMT_ASTC_6x5_UNORM = 0x00000080,
+ VK_FMT_ASTC_6x5_SRGB = 0x00000081,
+ VK_FMT_ASTC_6x6_UNORM = 0x00000082,
+ VK_FMT_ASTC_6x6_SRGB = 0x00000083,
+ VK_FMT_ASTC_8x5_UNORM = 0x00000084,
+ VK_FMT_ASTC_8x5_SRGB = 0x00000085,
+ VK_FMT_ASTC_8x6_UNORM = 0x00000086,
+ VK_FMT_ASTC_8x6_SRGB = 0x00000087,
+ VK_FMT_ASTC_8x8_UNORM = 0x00000088,
+ VK_FMT_ASTC_8x8_SRGB = 0x00000089,
+ VK_FMT_ASTC_10x5_UNORM = 0x0000008A,
+ VK_FMT_ASTC_10x5_SRGB = 0x0000008B,
+ VK_FMT_ASTC_10x6_UNORM = 0x0000008C,
+ VK_FMT_ASTC_10x6_SRGB = 0x0000008D,
+ VK_FMT_ASTC_10x8_UNORM = 0x0000008E,
+ VK_FMT_ASTC_10x8_SRGB = 0x0000008F,
+ VK_FMT_ASTC_10x10_UNORM = 0x00000090,
+ VK_FMT_ASTC_10x10_SRGB = 0x00000091,
+ VK_FMT_ASTC_12x10_UNORM = 0x00000092,
+ VK_FMT_ASTC_12x10_SRGB = 0x00000093,
+ VK_FMT_ASTC_12x12_UNORM = 0x00000094,
+ VK_FMT_ASTC_12x12_SRGB = 0x00000095,
+ VK_FMT_B4G4R4A4_UNORM = 0x00000096,
+ VK_FMT_B5G5R5A1_UNORM = 0x00000097,
+ VK_FMT_B5G6R5_UNORM = 0x00000098,
+ VK_FMT_B5G6R5_USCALED = 0x00000099,
+ VK_FMT_B8G8R8_UNORM = 0x0000009A,
+ VK_FMT_B8G8R8_SNORM = 0x0000009B,
+ VK_FMT_B8G8R8_USCALED = 0x0000009C,
+ VK_FMT_B8G8R8_SSCALED = 0x0000009D,
+ VK_FMT_B8G8R8_UINT = 0x0000009E,
+ VK_FMT_B8G8R8_SINT = 0x0000009F,
+ VK_FMT_B8G8R8_SRGB = 0x000000A0,
+ VK_FMT_B8G8R8A8_UNORM = 0x000000A1,
+ VK_FMT_B8G8R8A8_SNORM = 0x000000A2,
+ VK_FMT_B8G8R8A8_USCALED = 0x000000A3,
+ VK_FMT_B8G8R8A8_SSCALED = 0x000000A4,
+ VK_FMT_B8G8R8A8_UINT = 0x000000A5,
+ VK_FMT_B8G8R8A8_SINT = 0x000000A6,
+ VK_FMT_B8G8R8A8_SRGB = 0x000000A7,
+ VK_FMT_B10G10R10A2_UNORM = 0x000000A8,
+ VK_FMT_B10G10R10A2_SNORM = 0x000000A9,
+ VK_FMT_B10G10R10A2_USCALED = 0x000000AA,
+ VK_FMT_B10G10R10A2_SSCALED = 0x000000AB,
+ VK_FMT_B10G10R10A2_UINT = 0x000000AC,
+ VK_FMT_B10G10R10A2_SINT = 0x000000AD,
+
+ VK_FMT_BEGIN_RANGE = VK_FMT_UNDEFINED,
+ VK_FMT_END_RANGE = VK_FMT_B10G10R10A2_SINT,
+ VK_NUM_FMT = (VK_FMT_END_RANGE - VK_FMT_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_FORMAT)
+} VK_FORMAT;
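+
+// Illustrative use of the BEGIN_RANGE/END_RANGE/NUM convention shared by the
+// enums above; since core values are contiguous, the range can be iterated:
+//
+//     for (int fmt = VK_FMT_BEGIN_RANGE; fmt <= VK_FMT_END_RANGE; fmt++) {
+//         /* query properties for each of the VK_NUM_FMT core formats */
+//     }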
// Shader stage enumerant
-typedef enum _XGL_PIPELINE_SHADER_STAGE
-{
- XGL_SHADER_STAGE_VERTEX = 0,
- XGL_SHADER_STAGE_TESS_CONTROL = 1,
- XGL_SHADER_STAGE_TESS_EVALUATION = 2,
- XGL_SHADER_STAGE_GEOMETRY = 3,
- XGL_SHADER_STAGE_FRAGMENT = 4,
- XGL_SHADER_STAGE_COMPUTE = 5,
-
- XGL_SHADER_STAGE_BEGIN_RANGE = XGL_SHADER_STAGE_VERTEX,
- XGL_SHADER_STAGE_END_RANGE = XGL_SHADER_STAGE_COMPUTE,
- XGL_NUM_SHADER_STAGE = (XGL_SHADER_STAGE_END_RANGE - XGL_SHADER_STAGE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_PIPELINE_SHADER_STAGE)
-} XGL_PIPELINE_SHADER_STAGE;
-
-typedef enum _XGL_SHADER_STAGE_FLAGS
-{
- XGL_SHADER_STAGE_FLAGS_VERTEX_BIT = 0x00000001,
- XGL_SHADER_STAGE_FLAGS_TESS_CONTROL_BIT = 0x00000002,
- XGL_SHADER_STAGE_FLAGS_TESS_EVALUATION_BIT = 0x00000004,
- XGL_SHADER_STAGE_FLAGS_GEOMETRY_BIT = 0x00000008,
- XGL_SHADER_STAGE_FLAGS_FRAGMENT_BIT = 0x00000010,
- XGL_SHADER_STAGE_FLAGS_COMPUTE_BIT = 0x00000020,
-
- XGL_SHADER_STAGE_FLAGS_ALL = 0x7FFFFFFF,
- XGL_MAX_ENUM(_XGL_SHADER_STAGE_FLAGS)
-} XGL_SHADER_STAGE_FLAGS;
+typedef enum _VK_PIPELINE_SHADER_STAGE
+{
+ VK_SHADER_STAGE_VERTEX = 0,
+ VK_SHADER_STAGE_TESS_CONTROL = 1,
+ VK_SHADER_STAGE_TESS_EVALUATION = 2,
+ VK_SHADER_STAGE_GEOMETRY = 3,
+ VK_SHADER_STAGE_FRAGMENT = 4,
+ VK_SHADER_STAGE_COMPUTE = 5,
+
+ VK_SHADER_STAGE_BEGIN_RANGE = VK_SHADER_STAGE_VERTEX,
+ VK_SHADER_STAGE_END_RANGE = VK_SHADER_STAGE_COMPUTE,
+ VK_NUM_SHADER_STAGE = (VK_SHADER_STAGE_END_RANGE - VK_SHADER_STAGE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_PIPELINE_SHADER_STAGE)
+} VK_PIPELINE_SHADER_STAGE;
+
+typedef enum _VK_SHADER_STAGE_FLAGS
+{
+ VK_SHADER_STAGE_FLAGS_VERTEX_BIT = 0x00000001,
+ VK_SHADER_STAGE_FLAGS_TESS_CONTROL_BIT = 0x00000002,
+ VK_SHADER_STAGE_FLAGS_TESS_EVALUATION_BIT = 0x00000004,
+ VK_SHADER_STAGE_FLAGS_GEOMETRY_BIT = 0x00000008,
+ VK_SHADER_STAGE_FLAGS_FRAGMENT_BIT = 0x00000010,
+ VK_SHADER_STAGE_FLAGS_COMPUTE_BIT = 0x00000020,
+
+ VK_SHADER_STAGE_FLAGS_ALL = 0x7FFFFFFF,
+ VK_MAX_ENUM(_VK_SHADER_STAGE_FLAGS)
+} VK_SHADER_STAGE_FLAGS;
// Structure type enumerant
-typedef enum _XGL_STRUCTURE_TYPE
-{
- XGL_STRUCTURE_TYPE_APPLICATION_INFO = 0,
- XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO = 1,
- XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO = 2,
- XGL_STRUCTURE_TYPE_MEMORY_OPEN_INFO = 4,
- XGL_STRUCTURE_TYPE_PEER_MEMORY_OPEN_INFO = 5,
- XGL_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO = 6,
- XGL_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO = 7,
- XGL_STRUCTURE_TYPE_EVENT_WAIT_INFO = 8,
- XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO = 9,
- XGL_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO = 10,
- XGL_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO = 11,
- XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO = 12,
- XGL_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO = 13,
- XGL_STRUCTURE_TYPE_SAMPLER_CREATE_INFO = 14,
- XGL_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO = 15,
- XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO = 16,
- XGL_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO = 17,
- XGL_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO = 18,
- XGL_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO = 19,
- XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO = 20,
- XGL_STRUCTURE_TYPE_EVENT_CREATE_INFO = 21,
- XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO = 22,
- XGL_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO = 23,
- XGL_STRUCTURE_TYPE_SEMAPHORE_OPEN_INFO = 24,
- XGL_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO = 25,
- XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO = 26,
- XGL_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO = 27,
- XGL_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO = 28,
- XGL_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO = 29,
- XGL_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO = 30,
- XGL_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO = 31,
- XGL_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO = 32,
- XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO = 33,
- XGL_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO = 34,
- XGL_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO = 35,
- XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO = 36,
- XGL_STRUCTURE_TYPE_BUFFER_CREATE_INFO = 37,
- XGL_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO = 38,
- XGL_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO = 39,
- XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO = 40,
- XGL_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO = 41,
- XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO = 42,
- XGL_STRUCTURE_TYPE_LAYER_CREATE_INFO = 43,
- XGL_STRUCTURE_TYPE_PIPELINE_BARRIER = 44,
- XGL_STRUCTURE_TYPE_MEMORY_BARRIER = 45,
- XGL_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER = 46,
- XGL_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER = 47,
- XGL_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO = 48,
- XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS = 49,
- XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES = 50,
- XGL_STRUCTURE_TYPE_UPDATE_IMAGES = 51,
- XGL_STRUCTURE_TYPE_UPDATE_BUFFERS = 52,
- XGL_STRUCTURE_TYPE_UPDATE_AS_COPY = 53,
- XGL_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO = 54,
- XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO = 55,
- XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO = 56,
- XGL_STRUCTURE_TYPE_BEGIN_RANGE = XGL_STRUCTURE_TYPE_APPLICATION_INFO,
- XGL_STRUCTURE_TYPE_END_RANGE = XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
- XGL_NUM_STRUCTURE_TYPE = (XGL_STRUCTURE_TYPE_END_RANGE - XGL_STRUCTURE_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_STRUCTURE_TYPE)
-} XGL_STRUCTURE_TYPE;
+typedef enum _VK_STRUCTURE_TYPE
+{
+ VK_STRUCTURE_TYPE_APPLICATION_INFO = 0,
+ VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO = 1,
+ VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO = 2,
+ VK_STRUCTURE_TYPE_MEMORY_OPEN_INFO = 4,
+ VK_STRUCTURE_TYPE_PEER_MEMORY_OPEN_INFO = 5,
+ VK_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO = 6,
+ VK_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO = 7,
+ VK_STRUCTURE_TYPE_EVENT_WAIT_INFO = 8,
+ VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO = 9,
+ VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO = 10,
+ VK_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO = 11,
+ VK_STRUCTURE_TYPE_SHADER_CREATE_INFO = 12,
+ VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO = 13,
+ VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO = 14,
+ VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO = 15,
+ VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO = 16,
+ VK_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO = 17,
+ VK_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO = 18,
+ VK_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO = 19,
+ VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO = 20,
+ VK_STRUCTURE_TYPE_EVENT_CREATE_INFO = 21,
+ VK_STRUCTURE_TYPE_FENCE_CREATE_INFO = 22,
+ VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO = 23,
+ VK_STRUCTURE_TYPE_SEMAPHORE_OPEN_INFO = 24,
+ VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO = 25,
+ VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO = 26,
+ VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO = 27,
+ VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO = 28,
+ VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO = 29,
+ VK_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO = 30,
+ VK_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO = 31,
+ VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO = 32,
+ VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO = 33,
+ VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO = 34,
+ VK_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO = 35,
+ VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO = 36,
+ VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO = 37,
+ VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO = 38,
+ VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO = 39,
+ VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO = 40,
+ VK_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO = 41,
+ VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO = 42,
+ VK_STRUCTURE_TYPE_LAYER_CREATE_INFO = 43,
+ VK_STRUCTURE_TYPE_PIPELINE_BARRIER = 44,
+ VK_STRUCTURE_TYPE_MEMORY_BARRIER = 45,
+ VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER = 46,
+ VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER = 47,
+ VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO = 48,
+ VK_STRUCTURE_TYPE_UPDATE_SAMPLERS = 49,
+ VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES = 50,
+ VK_STRUCTURE_TYPE_UPDATE_IMAGES = 51,
+ VK_STRUCTURE_TYPE_UPDATE_BUFFERS = 52,
+ VK_STRUCTURE_TYPE_UPDATE_AS_COPY = 53,
+ VK_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO = 54,
+ VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO = 55,
+ VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO = 56,
+ VK_STRUCTURE_TYPE_BEGIN_RANGE = VK_STRUCTURE_TYPE_APPLICATION_INFO,
+ VK_STRUCTURE_TYPE_END_RANGE = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
+ VK_NUM_STRUCTURE_TYPE = (VK_STRUCTURE_TYPE_END_RANGE - VK_STRUCTURE_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_STRUCTURE_TYPE)
+} VK_STRUCTURE_TYPE;
// ------------------------------------------------------------------------------------------------
// Flags
// Device creation flags
-typedef enum _XGL_DEVICE_CREATE_FLAGS
+typedef enum _VK_DEVICE_CREATE_FLAGS
{
- XGL_DEVICE_CREATE_VALIDATION_BIT = 0x00000001,
- XGL_DEVICE_CREATE_MGPU_IQ_MATCH_BIT = 0x00000002,
- XGL_MAX_ENUM(_XGL_DEVICE_CREATE_FLAGS)
-} XGL_DEVICE_CREATE_FLAGS;
+ VK_DEVICE_CREATE_VALIDATION_BIT = 0x00000001,
+ VK_DEVICE_CREATE_MGPU_IQ_MATCH_BIT = 0x00000002,
+ VK_MAX_ENUM(_VK_DEVICE_CREATE_FLAGS)
+} VK_DEVICE_CREATE_FLAGS;
// Queue capabilities
-typedef enum _XGL_QUEUE_FLAGS
-{
- XGL_QUEUE_GRAPHICS_BIT = 0x00000001, // Queue supports graphics operations
- XGL_QUEUE_COMPUTE_BIT = 0x00000002, // Queue supports compute operations
- XGL_QUEUE_DMA_BIT = 0x00000004, // Queue supports DMA operations
- XGL_QUEUE_EXTENDED_BIT = 0x40000000, // Extended queue
- XGL_MAX_ENUM(_XGL_QUEUE_FLAGS)
-} XGL_QUEUE_FLAGS;
-
-// memory properties passed into xglAllocMemory().
-typedef enum _XGL_MEMORY_PROPERTY_FLAGS
-{
- XGL_MEMORY_PROPERTY_GPU_ONLY = 0x00000000, // If not set, then allocate memory on device (GPU)
- XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT = 0x00000001,
- XGL_MEMORY_PROPERTY_CPU_GPU_COHERENT_BIT = 0x00000002,
- XGL_MEMORY_PROPERTY_CPU_UNCACHED_BIT = 0x00000004,
- XGL_MEMORY_PROPERTY_CPU_WRITE_COMBINED_BIT = 0x00000008,
- XGL_MEMORY_PROPERTY_PREFER_CPU_LOCAL = 0x00000010, // all else being equal, prefer CPU access
- XGL_MEMORY_PROPERTY_SHAREABLE_BIT = 0x00000020,
- XGL_MAX_ENUM(_XGL_MEMORY_PROPERTY_FLAGS)
-} XGL_MEMORY_PROPERTY_FLAGS;
-
-typedef enum _XGL_MEMORY_TYPE
-{
- XGL_MEMORY_TYPE_OTHER = 0x00000000, // device memory that is not any of the others
- XGL_MEMORY_TYPE_BUFFER = 0x00000001, // memory for buffers and associated information
- XGL_MEMORY_TYPE_IMAGE = 0x00000002, // memory for images and associated information
-
- XGL_MEMORY_TYPE_BEGIN_RANGE = XGL_MEMORY_TYPE_OTHER,
- XGL_MEMORY_TYPE_END_RANGE = XGL_MEMORY_TYPE_IMAGE,
- XGL_NUM_MEMORY_TYPE = (XGL_MEMORY_TYPE_END_RANGE - XGL_MEMORY_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_MEMORY_TYPE)
-} XGL_MEMORY_TYPE;
+typedef enum _VK_QUEUE_FLAGS
+{
+ VK_QUEUE_GRAPHICS_BIT = 0x00000001, // Queue supports graphics operations
+ VK_QUEUE_COMPUTE_BIT = 0x00000002, // Queue supports compute operations
+ VK_QUEUE_DMA_BIT = 0x00000004, // Queue supports DMA operations
+ VK_QUEUE_EXTENDED_BIT = 0x40000000, // Extended queue
+ VK_MAX_ENUM(_VK_QUEUE_FLAGS)
+} VK_QUEUE_FLAGS;
+
+// memory properties passed into vkAllocMemory().
+typedef enum _VK_MEMORY_PROPERTY_FLAGS
+{
+ VK_MEMORY_PROPERTY_GPU_ONLY = 0x00000000, // If not set, then allocate memory on device (GPU)
+ VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT = 0x00000001,
+ VK_MEMORY_PROPERTY_CPU_GPU_COHERENT_BIT = 0x00000002,
+ VK_MEMORY_PROPERTY_CPU_UNCACHED_BIT = 0x00000004,
+ VK_MEMORY_PROPERTY_CPU_WRITE_COMBINED_BIT = 0x00000008,
+ VK_MEMORY_PROPERTY_PREFER_CPU_LOCAL = 0x00000010, // all else being equal, prefer CPU access
+ VK_MEMORY_PROPERTY_SHAREABLE_BIT = 0x00000020,
+ VK_MAX_ENUM(_VK_MEMORY_PROPERTY_FLAGS)
+} VK_MEMORY_PROPERTY_FLAGS;
+
+typedef enum _VK_MEMORY_TYPE
+{
+ VK_MEMORY_TYPE_OTHER = 0x00000000, // device memory that is not any of the others
+ VK_MEMORY_TYPE_BUFFER = 0x00000001, // memory for buffers and associated information
+ VK_MEMORY_TYPE_IMAGE = 0x00000002, // memory for images and associated information
+
+ VK_MEMORY_TYPE_BEGIN_RANGE = VK_MEMORY_TYPE_OTHER,
+ VK_MEMORY_TYPE_END_RANGE = VK_MEMORY_TYPE_IMAGE,
+ VK_NUM_MEMORY_TYPE = (VK_MEMORY_TYPE_END_RANGE - VK_MEMORY_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_MEMORY_TYPE)
+} VK_MEMORY_TYPE;
// Buffer and buffer allocation usage flags
-typedef enum _XGL_BUFFER_USAGE_FLAGS
-{
- XGL_BUFFER_USAGE_GENERAL = 0x00000000, // no special usage
- XGL_BUFFER_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001, // Shader read (e.g. TBO, image buffer, UBO, SSBO)
- XGL_BUFFER_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002, // Shader write (e.g. image buffer, SSBO)
- XGL_BUFFER_USAGE_SHADER_ACCESS_ATOMIC_BIT = 0x00000004, // Shader atomic operations (e.g. image buffer, SSBO)
- XGL_BUFFER_USAGE_TRANSFER_SOURCE_BIT = 0x00000008, // used as a source for copies
- XGL_BUFFER_USAGE_TRANSFER_DESTINATION_BIT = 0x00000010, // used as a destination for copies
- XGL_BUFFER_USAGE_UNIFORM_READ_BIT = 0x00000020, // Uniform read (UBO)
- XGL_BUFFER_USAGE_INDEX_FETCH_BIT = 0x00000040, // Fixed function index fetch (index buffer)
- XGL_BUFFER_USAGE_VERTEX_FETCH_BIT = 0x00000080, // Fixed function vertex fetch (VBO)
- XGL_BUFFER_USAGE_SHADER_STORAGE_BIT = 0x00000100, // Shader storage buffer (SSBO)
- XGL_BUFFER_USAGE_INDIRECT_PARAMETER_FETCH_BIT = 0x00000200, // Can be the source of indirect parameters (e.g. indirect buffer, parameter buffer)
- XGL_BUFFER_USAGE_TEXTURE_BUFFER_BIT = 0x00000400, // texture buffer (TBO)
- XGL_BUFFER_USAGE_IMAGE_BUFFER_BIT = 0x00000800, // image buffer (load/store)
- XGL_MAX_ENUM(_XGL_BUFFER_USAGE_FLAGS)
-} XGL_BUFFER_USAGE_FLAGS;
+typedef enum _VK_BUFFER_USAGE_FLAGS
+{
+ VK_BUFFER_USAGE_GENERAL = 0x00000000, // no special usage
+ VK_BUFFER_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001, // Shader read (e.g. TBO, image buffer, UBO, SSBO)
+ VK_BUFFER_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002, // Shader write (e.g. image buffer, SSBO)
+ VK_BUFFER_USAGE_SHADER_ACCESS_ATOMIC_BIT = 0x00000004, // Shader atomic operations (e.g. image buffer, SSBO)
+ VK_BUFFER_USAGE_TRANSFER_SOURCE_BIT = 0x00000008, // used as a source for copies
+ VK_BUFFER_USAGE_TRANSFER_DESTINATION_BIT = 0x00000010, // used as a destination for copies
+ VK_BUFFER_USAGE_UNIFORM_READ_BIT = 0x00000020, // Uniform read (UBO)
+ VK_BUFFER_USAGE_INDEX_FETCH_BIT = 0x00000040, // Fixed function index fetch (index buffer)
+ VK_BUFFER_USAGE_VERTEX_FETCH_BIT = 0x00000080, // Fixed function vertex fetch (VBO)
+ VK_BUFFER_USAGE_SHADER_STORAGE_BIT = 0x00000100, // Shader storage buffer (SSBO)
+ VK_BUFFER_USAGE_INDIRECT_PARAMETER_FETCH_BIT = 0x00000200, // Can be the source of indirect parameters (e.g. indirect buffer, parameter buffer)
+ VK_BUFFER_USAGE_TEXTURE_BUFFER_BIT = 0x00000400, // texture buffer (TBO)
+ VK_BUFFER_USAGE_IMAGE_BUFFER_BIT = 0x00000800, // image buffer (load/store)
+ VK_MAX_ENUM(_VK_BUFFER_USAGE_FLAGS)
+} VK_BUFFER_USAGE_FLAGS;
// Buffer flags
-typedef enum _XGL_BUFFER_CREATE_FLAGS
+typedef enum _VK_BUFFER_CREATE_FLAGS
{
- XGL_BUFFER_CREATE_SHAREABLE_BIT = 0x00000001,
- XGL_BUFFER_CREATE_SPARSE_BIT = 0x00000002,
- XGL_MAX_ENUM(_XGL_BUFFER_CREATE_FLAGS)
-} XGL_BUFFER_CREATE_FLAGS;
+ VK_BUFFER_CREATE_SHAREABLE_BIT = 0x00000001,
+ VK_BUFFER_CREATE_SPARSE_BIT = 0x00000002,
+ VK_MAX_ENUM(_VK_BUFFER_CREATE_FLAGS)
+} VK_BUFFER_CREATE_FLAGS;
-typedef enum _XGL_BUFFER_VIEW_TYPE
+typedef enum _VK_BUFFER_VIEW_TYPE
{
- XGL_BUFFER_VIEW_RAW = 0x00000000, // Raw buffer without special structure (e.g. UBO, SSBO, indirect and parameter buffers)
- XGL_BUFFER_VIEW_TYPED = 0x00000001, // Typed buffer, format and channels are used (TBO, image buffer)
+ VK_BUFFER_VIEW_RAW = 0x00000000, // Raw buffer without special structure (e.g. UBO, SSBO, indirect and parameter buffers)
+ VK_BUFFER_VIEW_TYPED = 0x00000001, // Typed buffer, format and channels are used (TBO, image buffer)
- XGL_BUFFER_VIEW_TYPE_BEGIN_RANGE = XGL_BUFFER_VIEW_RAW,
- XGL_BUFFER_VIEW_TYPE_END_RANGE = XGL_BUFFER_VIEW_TYPED,
- XGL_NUM_BUFFER_VIEW_TYPE = (XGL_BUFFER_VIEW_TYPE_END_RANGE - XGL_BUFFER_VIEW_TYPE_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_BUFFER_VIEW_TYPE)
-} XGL_BUFFER_VIEW_TYPE;
+ VK_BUFFER_VIEW_TYPE_BEGIN_RANGE = VK_BUFFER_VIEW_RAW,
+ VK_BUFFER_VIEW_TYPE_END_RANGE = VK_BUFFER_VIEW_TYPED,
+ VK_NUM_BUFFER_VIEW_TYPE = (VK_BUFFER_VIEW_TYPE_END_RANGE - VK_BUFFER_VIEW_TYPE_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_BUFFER_VIEW_TYPE)
+} VK_BUFFER_VIEW_TYPE;
// Image memory allocations can be used for resources of a given format class.
-typedef enum _XGL_IMAGE_FORMAT_CLASS
-{
- XGL_IMAGE_FORMAT_CLASS_128_BITS = 1, // color formats
- XGL_IMAGE_FORMAT_CLASS_96_BITS = 2,
- XGL_IMAGE_FORMAT_CLASS_64_BITS = 3,
- XGL_IMAGE_FORMAT_CLASS_48_BITS = 4,
- XGL_IMAGE_FORMAT_CLASS_32_BITS = 5,
- XGL_IMAGE_FORMAT_CLASS_24_BITS = 6,
- XGL_IMAGE_FORMAT_CLASS_16_BITS = 7,
- XGL_IMAGE_FORMAT_CLASS_8_BITS = 8,
- XGL_IMAGE_FORMAT_CLASS_128_BIT_BLOCK = 9, // 128-bit block compressed formats
- XGL_IMAGE_FORMAT_CLASS_64_BIT_BLOCK = 10, // 64-bit block compressed formats
- XGL_IMAGE_FORMAT_CLASS_D32 = 11, // D32_SFLOAT
- XGL_IMAGE_FORMAT_CLASS_D24 = 12, // D24_UNORM
- XGL_IMAGE_FORMAT_CLASS_D16 = 13, // D16_UNORM
- XGL_IMAGE_FORMAT_CLASS_S8 = 14, // S8_UINT
- XGL_IMAGE_FORMAT_CLASS_D32S8 = 15, // D32_SFLOAT_S8_UINT
- XGL_IMAGE_FORMAT_CLASS_D24S8 = 16, // D24_UNORM_S8_UINT
- XGL_IMAGE_FORMAT_CLASS_D16S8 = 17, // D16_UNORM_S8_UINT
- XGL_IMAGE_FORMAT_CLASS_LINEAR = 18, // used for pitch-linear (transparent) textures
-
- XGL_IMAGE_FORMAT_CLASS_BEGIN_RANGE = XGL_IMAGE_FORMAT_CLASS_128_BITS,
- XGL_IMAGE_FORMAT_CLASS_END_RANGE = XGL_IMAGE_FORMAT_CLASS_LINEAR,
- XGL_NUM_IMAGE_FORMAT_CLASS = (XGL_IMAGE_FORMAT_CLASS_END_RANGE - XGL_IMAGE_FORMAT_CLASS_BEGIN_RANGE + 1),
- XGL_MAX_ENUM(_XGL_IMAGE_FORMAT_CLASS)
-} XGL_IMAGE_FORMAT_CLASS;
+typedef enum _VK_IMAGE_FORMAT_CLASS
+{
+ VK_IMAGE_FORMAT_CLASS_128_BITS = 1, // color formats
+ VK_IMAGE_FORMAT_CLASS_96_BITS = 2,
+ VK_IMAGE_FORMAT_CLASS_64_BITS = 3,
+ VK_IMAGE_FORMAT_CLASS_48_BITS = 4,
+ VK_IMAGE_FORMAT_CLASS_32_BITS = 5,
+ VK_IMAGE_FORMAT_CLASS_24_BITS = 6,
+ VK_IMAGE_FORMAT_CLASS_16_BITS = 7,
+ VK_IMAGE_FORMAT_CLASS_8_BITS = 8,
+ VK_IMAGE_FORMAT_CLASS_128_BIT_BLOCK = 9, // 128-bit block compressed formats
+ VK_IMAGE_FORMAT_CLASS_64_BIT_BLOCK = 10, // 64-bit block compressed formats
+ VK_IMAGE_FORMAT_CLASS_D32 = 11, // D32_SFLOAT
+ VK_IMAGE_FORMAT_CLASS_D24 = 12, // D24_UNORM
+ VK_IMAGE_FORMAT_CLASS_D16 = 13, // D16_UNORM
+ VK_IMAGE_FORMAT_CLASS_S8 = 14, // S8_UINT
+ VK_IMAGE_FORMAT_CLASS_D32S8 = 15, // D32_SFLOAT_S8_UINT
+ VK_IMAGE_FORMAT_CLASS_D24S8 = 16, // D24_UNORM_S8_UINT
+ VK_IMAGE_FORMAT_CLASS_D16S8 = 17, // D16_UNORM_S8_UINT
+ VK_IMAGE_FORMAT_CLASS_LINEAR = 18, // used for pitch-linear (transparent) textures
+
+ VK_IMAGE_FORMAT_CLASS_BEGIN_RANGE = VK_IMAGE_FORMAT_CLASS_128_BITS,
+ VK_IMAGE_FORMAT_CLASS_END_RANGE = VK_IMAGE_FORMAT_CLASS_LINEAR,
+ VK_NUM_IMAGE_FORMAT_CLASS = (VK_IMAGE_FORMAT_CLASS_END_RANGE - VK_IMAGE_FORMAT_CLASS_BEGIN_RANGE + 1),
+ VK_MAX_ENUM(_VK_IMAGE_FORMAT_CLASS)
+} VK_IMAGE_FORMAT_CLASS;
// Image and image allocation usage flags
-typedef enum _XGL_IMAGE_USAGE_FLAGS
-{
- XGL_IMAGE_USAGE_GENERAL = 0x00000000, // no special usage
- XGL_IMAGE_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001, // shader read (e.g. texture, image)
- XGL_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002, // shader write (e.g. image)
- XGL_IMAGE_USAGE_SHADER_ACCESS_ATOMIC_BIT = 0x00000004, // shader atomic operations (e.g. image)
- XGL_IMAGE_USAGE_TRANSFER_SOURCE_BIT = 0x00000008, // used as a source for copies
- XGL_IMAGE_USAGE_TRANSFER_DESTINATION_BIT = 0x00000010, // used as a destination for copies
- XGL_IMAGE_USAGE_TEXTURE_BIT = 0x00000020, // opaque texture (2d, 3d, etc.)
- XGL_IMAGE_USAGE_IMAGE_BIT = 0x00000040, // opaque image (2d, 3d, etc.)
- XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT = 0x00000080, // framebuffer color attachment
- XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT = 0x00000100, // framebuffer depth/stencil
- XGL_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT = 0x00000200, // image data not needed outside of rendering.
- XGL_MAX_ENUM(_XGL_IMAGE_USAGE_FLAGS)
-} XGL_IMAGE_USAGE_FLAGS;
+typedef enum _VK_IMAGE_USAGE_FLAGS
+{
+ VK_IMAGE_USAGE_GENERAL = 0x00000000, // no special usage
+ VK_IMAGE_USAGE_SHADER_ACCESS_READ_BIT = 0x00000001, // shader read (e.g. texture, image)
+ VK_IMAGE_USAGE_SHADER_ACCESS_WRITE_BIT = 0x00000002, // shader write (e.g. image)
+ VK_IMAGE_USAGE_SHADER_ACCESS_ATOMIC_BIT = 0x00000004, // shader atomic operations (e.g. image)
+ VK_IMAGE_USAGE_TRANSFER_SOURCE_BIT = 0x00000008, // used as a source for copies
+ VK_IMAGE_USAGE_TRANSFER_DESTINATION_BIT = 0x00000010, // used as a destination for copies
+ VK_IMAGE_USAGE_TEXTURE_BIT = 0x00000020, // opaque texture (2d, 3d, etc.)
+ VK_IMAGE_USAGE_IMAGE_BIT = 0x00000040, // opaque image (2d, 3d, etc.)
+ VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT = 0x00000080, // framebuffer color attachment
+ VK_IMAGE_USAGE_DEPTH_STENCIL_BIT = 0x00000100, // framebuffer depth/stencil
+ VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT = 0x00000200, // image data not needed outside of rendering.
+ VK_MAX_ENUM(_VK_IMAGE_USAGE_FLAGS)
+} VK_IMAGE_USAGE_FLAGS;
// Image flags
-typedef enum _XGL_IMAGE_CREATE_FLAGS
+typedef enum _VK_IMAGE_CREATE_FLAGS
{
- XGL_IMAGE_CREATE_INVARIANT_DATA_BIT = 0x00000001,
- XGL_IMAGE_CREATE_CLONEABLE_BIT = 0x00000002,
- XGL_IMAGE_CREATE_SHAREABLE_BIT = 0x00000004,
- XGL_IMAGE_CREATE_SPARSE_BIT = 0x00000008,
- XGL_IMAGE_CREATE_MUTABLE_FORMAT_BIT = 0x00000010, // Allows image views to have different format than the base image
- XGL_MAX_ENUM(_XGL_IMAGE_CREATE_FLAGS)
-} XGL_IMAGE_CREATE_FLAGS;
+ VK_IMAGE_CREATE_INVARIANT_DATA_BIT = 0x00000001,
+ VK_IMAGE_CREATE_CLONEABLE_BIT = 0x00000002,
+ VK_IMAGE_CREATE_SHAREABLE_BIT = 0x00000004,
+ VK_IMAGE_CREATE_SPARSE_BIT = 0x00000008,
+ VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT = 0x00000010, // Allows image views to have different format than the base image
+ VK_MAX_ENUM(_VK_IMAGE_CREATE_FLAGS)
+} VK_IMAGE_CREATE_FLAGS;
// Depth-stencil view creation flags
-typedef enum _XGL_DEPTH_STENCIL_VIEW_CREATE_FLAGS
+typedef enum _VK_DEPTH_STENCIL_VIEW_CREATE_FLAGS
{
- XGL_DEPTH_STENCIL_VIEW_CREATE_READ_ONLY_DEPTH_BIT = 0x00000001,
- XGL_DEPTH_STENCIL_VIEW_CREATE_READ_ONLY_STENCIL_BIT = 0x00000002,
- XGL_MAX_ENUM(_XGL_DEPTH_STENCIL_VIEW_CREATE_FLAGS)
-} XGL_DEPTH_STENCIL_VIEW_CREATE_FLAGS;
+ VK_DEPTH_STENCIL_VIEW_CREATE_READ_ONLY_DEPTH_BIT = 0x00000001,
+ VK_DEPTH_STENCIL_VIEW_CREATE_READ_ONLY_STENCIL_BIT = 0x00000002,
+ VK_MAX_ENUM(_VK_DEPTH_STENCIL_VIEW_CREATE_FLAGS)
+} VK_DEPTH_STENCIL_VIEW_CREATE_FLAGS;
// Pipeline creation flags
-typedef enum _XGL_PIPELINE_CREATE_FLAGS
+typedef enum _VK_PIPELINE_CREATE_FLAGS
{
- XGL_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT = 0x00000001,
- XGL_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT = 0x00000002,
- XGL_MAX_ENUM(_XGL_PIPELINE_CREATE_FLAGS)
-} XGL_PIPELINE_CREATE_FLAGS;
+ VK_PIPELINE_CREATE_DISABLE_OPTIMIZATION_BIT = 0x00000001,
+ VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT = 0x00000002,
+ VK_MAX_ENUM(_VK_PIPELINE_CREATE_FLAGS)
+} VK_PIPELINE_CREATE_FLAGS;
// Fence creation flags
-typedef enum _XGL_FENCE_CREATE_FLAGS
+typedef enum _VK_FENCE_CREATE_FLAGS
{
- XGL_FENCE_CREATE_SIGNALED_BIT = 0x00000001,
- XGL_MAX_ENUM(_XGL_FENCE_CREATE_FLAGS)
-} XGL_FENCE_CREATE_FLAGS;
+ VK_FENCE_CREATE_SIGNALED_BIT = 0x00000001,
+ VK_MAX_ENUM(_VK_FENCE_CREATE_FLAGS)
+} VK_FENCE_CREATE_FLAGS;
// Semaphore creation flags
-typedef enum _XGL_SEMAPHORE_CREATE_FLAGS
+typedef enum _VK_SEMAPHORE_CREATE_FLAGS
{
- XGL_SEMAPHORE_CREATE_SHAREABLE_BIT = 0x00000001,
- XGL_MAX_ENUM(_XGL_SEMAPHORE_CREATE_FLAGS)
-} XGL_SEMAPHORE_CREATE_FLAGS;
+ VK_SEMAPHORE_CREATE_SHAREABLE_BIT = 0x00000001,
+ VK_MAX_ENUM(_VK_SEMAPHORE_CREATE_FLAGS)
+} VK_SEMAPHORE_CREATE_FLAGS;
// Format capability flags
-typedef enum _XGL_FORMAT_FEATURE_FLAGS
-{
- XGL_FORMAT_IMAGE_SHADER_READ_BIT = 0x00000001,
- XGL_FORMAT_IMAGE_SHADER_WRITE_BIT = 0x00000002,
- XGL_FORMAT_IMAGE_COPY_BIT = 0x00000004,
- XGL_FORMAT_MEMORY_SHADER_ACCESS_BIT = 0x00000008,
- XGL_FORMAT_COLOR_ATTACHMENT_WRITE_BIT = 0x00000010,
- XGL_FORMAT_COLOR_ATTACHMENT_BLEND_BIT = 0x00000020,
- XGL_FORMAT_DEPTH_ATTACHMENT_BIT = 0x00000040,
- XGL_FORMAT_STENCIL_ATTACHMENT_BIT = 0x00000080,
- XGL_FORMAT_MSAA_ATTACHMENT_BIT = 0x00000100,
- XGL_FORMAT_CONVERSION_BIT = 0x00000200,
- XGL_MAX_ENUM(_XGL_FORMAT_FEATURE_FLAGS)
-} XGL_FORMAT_FEATURE_FLAGS;
+typedef enum _VK_FORMAT_FEATURE_FLAGS
+{
+ VK_FORMAT_IMAGE_SHADER_READ_BIT = 0x00000001,
+ VK_FORMAT_IMAGE_SHADER_WRITE_BIT = 0x00000002,
+ VK_FORMAT_IMAGE_COPY_BIT = 0x00000004,
+ VK_FORMAT_MEMORY_SHADER_ACCESS_BIT = 0x00000008,
+ VK_FORMAT_COLOR_ATTACHMENT_WRITE_BIT = 0x00000010,
+ VK_FORMAT_COLOR_ATTACHMENT_BLEND_BIT = 0x00000020,
+ VK_FORMAT_DEPTH_ATTACHMENT_BIT = 0x00000040,
+ VK_FORMAT_STENCIL_ATTACHMENT_BIT = 0x00000080,
+ VK_FORMAT_MSAA_ATTACHMENT_BIT = 0x00000100,
+ VK_FORMAT_CONVERSION_BIT = 0x00000200,
+ VK_MAX_ENUM(_VK_FORMAT_FEATURE_FLAGS)
+} VK_FORMAT_FEATURE_FLAGS;
// Query flags
-typedef enum _XGL_QUERY_CONTROL_FLAGS
+typedef enum _VK_QUERY_CONTROL_FLAGS
{
- XGL_QUERY_IMPRECISE_DATA_BIT = 0x00000001,
- XGL_MAX_ENUM(_XGL_QUERY_CONTROL_FLAGS)
-} XGL_QUERY_CONTROL_FLAGS;
+ VK_QUERY_IMPRECISE_DATA_BIT = 0x00000001,
+ VK_MAX_ENUM(_VK_QUERY_CONTROL_FLAGS)
+} VK_QUERY_CONTROL_FLAGS;
// GPU compatibility flags
-typedef enum _XGL_GPU_COMPATIBILITY_FLAGS
-{
- XGL_GPU_COMPAT_ASIC_FEATURES_BIT = 0x00000001,
- XGL_GPU_COMPAT_IQ_MATCH_BIT = 0x00000002,
- XGL_GPU_COMPAT_PEER_TRANSFER_BIT = 0x00000004,
- XGL_GPU_COMPAT_SHARED_MEMORY_BIT = 0x00000008,
- XGL_GPU_COMPAT_SHARED_SYNC_BIT = 0x00000010,
- XGL_GPU_COMPAT_SHARED_GPU0_DISPLAY_BIT = 0x00000020,
- XGL_GPU_COMPAT_SHARED_GPU1_DISPLAY_BIT = 0x00000040,
- XGL_MAX_ENUM(_XGL_GPU_COMPATIBILITY_FLAGS)
-} XGL_GPU_COMPATIBILITY_FLAGS;
+typedef enum _VK_GPU_COMPATIBILITY_FLAGS
+{
+ VK_GPU_COMPAT_ASIC_FEATURES_BIT = 0x00000001,
+ VK_GPU_COMPAT_IQ_MATCH_BIT = 0x00000002,
+ VK_GPU_COMPAT_PEER_TRANSFER_BIT = 0x00000004,
+ VK_GPU_COMPAT_SHARED_MEMORY_BIT = 0x00000008,
+ VK_GPU_COMPAT_SHARED_SYNC_BIT = 0x00000010,
+ VK_GPU_COMPAT_SHARED_GPU0_DISPLAY_BIT = 0x00000020,
+ VK_GPU_COMPAT_SHARED_GPU1_DISPLAY_BIT = 0x00000040,
+ VK_MAX_ENUM(_VK_GPU_COMPATIBILITY_FLAGS)
+} VK_GPU_COMPATIBILITY_FLAGS;
// Command buffer building flags
-typedef enum _XGL_CMD_BUFFER_BUILD_FLAGS
+typedef enum _VK_CMD_BUFFER_BUILD_FLAGS
{
- XGL_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT = 0x00000001,
- XGL_CMD_BUFFER_OPTIMIZE_PIPELINE_SWITCH_BIT = 0x00000002,
- XGL_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT = 0x00000004,
- XGL_CMD_BUFFER_OPTIMIZE_DESCRIPTOR_SET_SWITCH_BIT = 0x00000008,
- XGL_MAX_ENUM(_XGL_CMD_BUFFER_BUILD_FLAGS)
-} XGL_CMD_BUFFER_BUILD_FLAGS;
+ VK_CMD_BUFFER_OPTIMIZE_GPU_SMALL_BATCH_BIT = 0x00000001,
+ VK_CMD_BUFFER_OPTIMIZE_PIPELINE_SWITCH_BIT = 0x00000002,
+ VK_CMD_BUFFER_OPTIMIZE_ONE_TIME_SUBMIT_BIT = 0x00000004,
+ VK_CMD_BUFFER_OPTIMIZE_DESCRIPTOR_SET_SWITCH_BIT = 0x00000008,
+ VK_MAX_ENUM(_VK_CMD_BUFFER_BUILD_FLAGS)
+} VK_CMD_BUFFER_BUILD_FLAGS;
// ------------------------------------------------------------------------------------------------
-// XGL structures
+// VK structures
-typedef struct _XGL_OFFSET2D
+typedef struct _VK_OFFSET2D
{
int32_t x;
int32_t y;
-} XGL_OFFSET2D;
+} VK_OFFSET2D;
-typedef struct _XGL_OFFSET3D
+typedef struct _VK_OFFSET3D
{
int32_t x;
int32_t y;
int32_t z;
-} XGL_OFFSET3D;
+} VK_OFFSET3D;
-typedef struct _XGL_EXTENT2D
+typedef struct _VK_EXTENT2D
{
int32_t width;
int32_t height;
-} XGL_EXTENT2D;
+} VK_EXTENT2D;
-typedef struct _XGL_EXTENT3D
+typedef struct _VK_EXTENT3D
{
int32_t width;
int32_t height;
int32_t depth;
-} XGL_EXTENT3D;
+} VK_EXTENT3D;
-typedef struct _XGL_VIEWPORT
+typedef struct _VK_VIEWPORT
{
float originX;
float originY;
    float width;
float height;
float minDepth;
float maxDepth;
-} XGL_VIEWPORT;
+} VK_VIEWPORT;
-typedef struct _XGL_RECT
+typedef struct _VK_RECT
{
- XGL_OFFSET2D offset;
- XGL_EXTENT2D extent;
-} XGL_RECT;
+ VK_OFFSET2D offset;
+ VK_EXTENT2D extent;
+} VK_RECT;
-typedef struct _XGL_CHANNEL_MAPPING
+typedef struct _VK_CHANNEL_MAPPING
{
- XGL_CHANNEL_SWIZZLE r;
- XGL_CHANNEL_SWIZZLE g;
- XGL_CHANNEL_SWIZZLE b;
- XGL_CHANNEL_SWIZZLE a;
-} XGL_CHANNEL_MAPPING;
+ VK_CHANNEL_SWIZZLE r;
+ VK_CHANNEL_SWIZZLE g;
+ VK_CHANNEL_SWIZZLE b;
+ VK_CHANNEL_SWIZZLE a;
+} VK_CHANNEL_MAPPING;
-typedef struct _XGL_PHYSICAL_GPU_PROPERTIES
+typedef struct _VK_PHYSICAL_GPU_PROPERTIES
{
uint32_t apiVersion;
uint32_t driverVersion;
uint32_t vendorId;
uint32_t deviceId;
- XGL_PHYSICAL_GPU_TYPE gpuType;
- char gpuName[XGL_MAX_PHYSICAL_GPU_NAME];
- XGL_GPU_SIZE maxInlineMemoryUpdateSize;
+ VK_PHYSICAL_GPU_TYPE gpuType;
+ char gpuName[VK_MAX_PHYSICAL_GPU_NAME];
+ VK_GPU_SIZE maxInlineMemoryUpdateSize;
uint32_t maxBoundDescriptorSets;
uint32_t maxThreadGroupSize;
uint64_t timestampFrequency;
uint32_t maxDescriptorSets; // at least 2?
uint32_t maxViewports; // at least 16?
uint32_t maxColorAttachments; // at least 8?
-} XGL_PHYSICAL_GPU_PROPERTIES;
+} VK_PHYSICAL_GPU_PROPERTIES;
-typedef struct _XGL_PHYSICAL_GPU_PERFORMANCE
+typedef struct _VK_PHYSICAL_GPU_PERFORMANCE
{
float maxGpuClock;
float aluPerClock;
float texPerClock;
float primsPerClock;
float pixelsPerClock;
-} XGL_PHYSICAL_GPU_PERFORMANCE;
+} VK_PHYSICAL_GPU_PERFORMANCE;
-typedef struct _XGL_GPU_COMPATIBILITY_INFO
+typedef struct _VK_GPU_COMPATIBILITY_INFO
{
- XGL_FLAGS compatibilityFlags; // XGL_GPU_COMPATIBILITY_FLAGS
-} XGL_GPU_COMPATIBILITY_INFO;
+ VK_FLAGS compatibilityFlags; // VK_GPU_COMPATIBILITY_FLAGS
+} VK_GPU_COMPATIBILITY_INFO;
-typedef struct _XGL_APPLICATION_INFO
+typedef struct _VK_APPLICATION_INFO
{
- XGL_STRUCTURE_TYPE sType; // Type of structure. Should be XGL_STRUCTURE_TYPE_APPLICATION_INFO
+ VK_STRUCTURE_TYPE sType; // Type of structure. Should be VK_STRUCTURE_TYPE_APPLICATION_INFO
const void* pNext; // Next structure in chain
const char* pAppName;
uint32_t appVersion;
const char* pEngineName;
uint32_t engineVersion;
uint32_t apiVersion;
-} XGL_APPLICATION_INFO;
+} VK_APPLICATION_INFO;
-typedef void* (XGLAPI *XGL_ALLOC_FUNCTION)(
+typedef void* (VKAPI *VK_ALLOC_FUNCTION)(
void* pUserData,
size_t size,
size_t alignment,
- XGL_SYSTEM_ALLOC_TYPE allocType);
+ VK_SYSTEM_ALLOC_TYPE allocType);
-typedef void (XGLAPI *XGL_FREE_FUNCTION)(
+typedef void (VKAPI *VK_FREE_FUNCTION)(
void* pUserData,
void* pMem);
-typedef struct _XGL_ALLOC_CALLBACKS
+typedef struct _VK_ALLOC_CALLBACKS
{
void* pUserData;
- XGL_ALLOC_FUNCTION pfnAlloc;
- XGL_FREE_FUNCTION pfnFree;
-} XGL_ALLOC_CALLBACKS;
+ VK_ALLOC_FUNCTION pfnAlloc;
+ VK_FREE_FUNCTION pfnFree;
+} VK_ALLOC_CALLBACKS;
-typedef struct _XGL_DEVICE_QUEUE_CREATE_INFO
+typedef struct _VK_DEVICE_QUEUE_CREATE_INFO
{
uint32_t queueNodeIndex;
uint32_t queueCount;
-} XGL_DEVICE_QUEUE_CREATE_INFO;
+} VK_DEVICE_QUEUE_CREATE_INFO;
-typedef struct _XGL_DEVICE_CREATE_INFO
+typedef struct _VK_DEVICE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Should be XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Should be VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t queueRecordCount;
- const XGL_DEVICE_QUEUE_CREATE_INFO* pRequestedQueues;
+ const VK_DEVICE_QUEUE_CREATE_INFO* pRequestedQueues;
uint32_t extensionCount;
const char*const* ppEnabledExtensionNames;
- XGL_VALIDATION_LEVEL maxValidationLevel;
- XGL_FLAGS flags; // XGL_DEVICE_CREATE_FLAGS
-} XGL_DEVICE_CREATE_INFO;
+ VK_VALIDATION_LEVEL maxValidationLevel;
+ VK_FLAGS flags; // VK_DEVICE_CREATE_FLAGS
+} VK_DEVICE_CREATE_INFO;
-typedef struct _XGL_INSTANCE_CREATE_INFO
+typedef struct _VK_INSTANCE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Should be XGL_STRUCTURE_TYPE_INSTANCE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Should be VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO
const void* pNext; // Pointer to next structure
- const XGL_APPLICATION_INFO* pAppInfo;
- const XGL_ALLOC_CALLBACKS* pAllocCb;
+ const VK_APPLICATION_INFO* pAppInfo;
+ const VK_ALLOC_CALLBACKS* pAllocCb;
uint32_t extensionCount;
const char*const* ppEnabledExtensionNames; // layer or extension name to be enabled
-} XGL_INSTANCE_CREATE_INFO;
+} VK_INSTANCE_CREATE_INFO;
-// can be added to XGL_DEVICE_CREATE_INFO or XGL_INSTANCE_CREATE_INFO via pNext
-typedef struct _XGL_LAYER_CREATE_INFO
+// can be added to VK_DEVICE_CREATE_INFO or VK_INSTANCE_CREATE_INFO via pNext
+typedef struct _VK_LAYER_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Should be XGL_STRUCTURE_TYPE_LAYER_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Should be VK_STRUCTURE_TYPE_LAYER_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t layerCount;
- const char *const* ppActiveLayerNames; // layer name from the layer's xglEnumerateLayers())
-} XGL_LAYER_CREATE_INFO;
+    const char *const* ppActiveLayerNames; // layer name from the layer's vkEnumerateLayers()
+} VK_LAYER_CREATE_INFO;
-typedef struct _XGL_PHYSICAL_GPU_QUEUE_PROPERTIES
+typedef struct _VK_PHYSICAL_GPU_QUEUE_PROPERTIES
{
- XGL_FLAGS queueFlags; // XGL_QUEUE_FLAGS
+ VK_FLAGS queueFlags; // VK_QUEUE_FLAGS
uint32_t queueCount;
uint32_t maxAtomicCounters;
bool32_t supportsTimestamps;
uint32_t maxMemReferences; // Tells how many memory references can be active for the given queue
-} XGL_PHYSICAL_GPU_QUEUE_PROPERTIES;
+} VK_PHYSICAL_GPU_QUEUE_PROPERTIES;
-typedef struct _XGL_PHYSICAL_GPU_MEMORY_PROPERTIES
+typedef struct _VK_PHYSICAL_GPU_MEMORY_PROPERTIES
{
bool32_t supportsMigration;
bool32_t supportsPinning;
-} XGL_PHYSICAL_GPU_MEMORY_PROPERTIES;
+} VK_PHYSICAL_GPU_MEMORY_PROPERTIES;
-typedef struct _XGL_MEMORY_ALLOC_INFO
+typedef struct _VK_MEMORY_ALLOC_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_MEMORY_ALLOC_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_MEMORY_ALLOC_INFO
const void* pNext; // Pointer to next structure
- XGL_GPU_SIZE allocationSize; // Size of memory allocation
- XGL_FLAGS memProps; // XGL_MEMORY_PROPERTY_FLAGS
- XGL_MEMORY_TYPE memType;
- XGL_MEMORY_PRIORITY memPriority;
-} XGL_MEMORY_ALLOC_INFO;
+ VK_GPU_SIZE allocationSize; // Size of memory allocation
+ VK_FLAGS memProps; // VK_MEMORY_PROPERTY_FLAGS
+ VK_MEMORY_TYPE memType;
+ VK_MEMORY_PRIORITY memPriority;
+} VK_MEMORY_ALLOC_INFO;
-// This structure is included in the XGL_MEMORY_ALLOC_INFO chain
+// This structure is included in the VK_MEMORY_ALLOC_INFO chain
// for memory regions allocated for buffer usage.
-typedef struct _XGL_MEMORY_ALLOC_BUFFER_INFO
+typedef struct _VK_MEMORY_ALLOC_BUFFER_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_MEMORY_ALLOC_BUFFER_INFO
const void* pNext; // Pointer to next structure
- XGL_FLAGS usage; // XGL_BUFFER_USAGE_FLAGS
-} XGL_MEMORY_ALLOC_BUFFER_INFO;
+ VK_FLAGS usage; // VK_BUFFER_USAGE_FLAGS
+} VK_MEMORY_ALLOC_BUFFER_INFO;
-// This structure is included in the XGL_MEMORY_ALLOC_INFO chain
+// This structure is included in the VK_MEMORY_ALLOC_INFO chain
// for memory regions allocated for image usage.
-typedef struct _XGL_MEMORY_ALLOC_IMAGE_INFO
+typedef struct _VK_MEMORY_ALLOC_IMAGE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_MEMORY_ALLOC_IMAGE_INFO
const void* pNext; // Pointer to next structure
- XGL_FLAGS usage; // XGL_IMAGE_USAGE_FLAGS
- XGL_IMAGE_FORMAT_CLASS formatClass;
+ VK_FLAGS usage; // VK_IMAGE_USAGE_FLAGS
+ VK_IMAGE_FORMAT_CLASS formatClass;
uint32_t samples;
-} XGL_MEMORY_ALLOC_IMAGE_INFO;
+} VK_MEMORY_ALLOC_IMAGE_INFO;
-typedef struct _XGL_MEMORY_OPEN_INFO
+typedef struct _VK_MEMORY_OPEN_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_MEMORY_OPEN_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_MEMORY_OPEN_INFO
const void* pNext; // Pointer to next structure
- XGL_GPU_MEMORY sharedMem;
-} XGL_MEMORY_OPEN_INFO;
+ VK_GPU_MEMORY sharedMem;
+} VK_MEMORY_OPEN_INFO;
-typedef struct _XGL_PEER_MEMORY_OPEN_INFO
+typedef struct _VK_PEER_MEMORY_OPEN_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PEER_MEMORY_OPEN_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PEER_MEMORY_OPEN_INFO
const void* pNext; // Pointer to next structure
- XGL_GPU_MEMORY originalMem;
-} XGL_PEER_MEMORY_OPEN_INFO;
+ VK_GPU_MEMORY originalMem;
+} VK_PEER_MEMORY_OPEN_INFO;
-typedef struct _XGL_MEMORY_REQUIREMENTS
+typedef struct _VK_MEMORY_REQUIREMENTS
{
- XGL_GPU_SIZE size; // Specified in bytes
- XGL_GPU_SIZE alignment; // Specified in bytes
- XGL_GPU_SIZE granularity; // Granularity on which xglBindObjectMemoryRange can bind sub-ranges of memory specified in bytes (usually the page size)
- XGL_FLAGS memProps; // XGL_MEMORY_PROPERTY_FLAGS
- XGL_MEMORY_TYPE memType;
-} XGL_MEMORY_REQUIREMENTS;
+ VK_GPU_SIZE size; // Specified in bytes
+ VK_GPU_SIZE alignment; // Specified in bytes
+ VK_GPU_SIZE granularity; // Granularity on which vkBindObjectMemoryRange can bind sub-ranges of memory specified in bytes (usually the page size)
+ VK_FLAGS memProps; // VK_MEMORY_PROPERTY_FLAGS
+ VK_MEMORY_TYPE memType;
+} VK_MEMORY_REQUIREMENTS;
-typedef struct _XGL_BUFFER_MEMORY_REQUIREMENTS
+typedef struct _VK_BUFFER_MEMORY_REQUIREMENTS
{
- XGL_FLAGS usage; // XGL_BUFFER_USAGE_FLAGS
-} XGL_BUFFER_MEMORY_REQUIREMENTS;
+ VK_FLAGS usage; // VK_BUFFER_USAGE_FLAGS
+} VK_BUFFER_MEMORY_REQUIREMENTS;
-typedef struct _XGL_IMAGE_MEMORY_REQUIREMENTS
+typedef struct _VK_IMAGE_MEMORY_REQUIREMENTS
{
- XGL_FLAGS usage; // XGL_IMAGE_USAGE_FLAGS
- XGL_IMAGE_FORMAT_CLASS formatClass;
+ VK_FLAGS usage; // VK_IMAGE_USAGE_FLAGS
+ VK_IMAGE_FORMAT_CLASS formatClass;
uint32_t samples;
-} XGL_IMAGE_MEMORY_REQUIREMENTS;
+} VK_IMAGE_MEMORY_REQUIREMENTS;
-typedef struct _XGL_FORMAT_PROPERTIES
+typedef struct _VK_FORMAT_PROPERTIES
{
- XGL_FLAGS linearTilingFeatures; // XGL_FORMAT_FEATURE_FLAGS
- XGL_FLAGS optimalTilingFeatures; // XGL_FORMAT_FEATURE_FLAGS
-} XGL_FORMAT_PROPERTIES;
+ VK_FLAGS linearTilingFeatures; // VK_FORMAT_FEATURE_FLAGS
+ VK_FLAGS optimalTilingFeatures; // VK_FORMAT_FEATURE_FLAGS
+} VK_FORMAT_PROPERTIES;
-typedef struct _XGL_BUFFER_VIEW_ATTACH_INFO
+typedef struct _VK_BUFFER_VIEW_ATTACH_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_BUFFER_VIEW_ATTACH_INFO
const void* pNext; // Pointer to next structure
- XGL_BUFFER_VIEW view;
-} XGL_BUFFER_VIEW_ATTACH_INFO;
+ VK_BUFFER_VIEW view;
+} VK_BUFFER_VIEW_ATTACH_INFO;
-typedef struct _XGL_IMAGE_VIEW_ATTACH_INFO
+typedef struct _VK_IMAGE_VIEW_ATTACH_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_VIEW_ATTACH_INFO
const void* pNext; // Pointer to next structure
- XGL_IMAGE_VIEW view;
- XGL_IMAGE_LAYOUT layout;
-} XGL_IMAGE_VIEW_ATTACH_INFO;
+ VK_IMAGE_VIEW view;
+ VK_IMAGE_LAYOUT layout;
+} VK_IMAGE_VIEW_ATTACH_INFO;
-typedef struct _XGL_UPDATE_SAMPLERS
+typedef struct _VK_UPDATE_SAMPLERS
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_UPDATE_SAMPLERS
const void* pNext; // Pointer to next structure
uint32_t binding; // Binding of the sampler (array)
uint32_t arrayIndex; // First element of the array to update or zero otherwise
uint32_t count; // Number of elements to update
- const XGL_SAMPLER* pSamplers;
-} XGL_UPDATE_SAMPLERS;
+ const VK_SAMPLER* pSamplers;
+} VK_UPDATE_SAMPLERS;
-typedef struct _XGL_SAMPLER_IMAGE_VIEW_INFO
+typedef struct _VK_SAMPLER_IMAGE_VIEW_INFO
{
- XGL_SAMPLER sampler;
- const XGL_IMAGE_VIEW_ATTACH_INFO* pImageView;
-} XGL_SAMPLER_IMAGE_VIEW_INFO;
+ VK_SAMPLER sampler;
+ const VK_IMAGE_VIEW_ATTACH_INFO* pImageView;
+} VK_SAMPLER_IMAGE_VIEW_INFO;
-typedef struct _XGL_UPDATE_SAMPLER_TEXTURES
+typedef struct _VK_UPDATE_SAMPLER_TEXTURES
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES
const void* pNext; // Pointer to next structure
uint32_t binding; // Binding of the combined texture sampler (array)
uint32_t arrayIndex; // First element of the array to update or zero otherwise
uint32_t count; // Number of elements to update
- const XGL_SAMPLER_IMAGE_VIEW_INFO* pSamplerImageViews;
-} XGL_UPDATE_SAMPLER_TEXTURES;
+ const VK_SAMPLER_IMAGE_VIEW_INFO* pSamplerImageViews;
+} VK_UPDATE_SAMPLER_TEXTURES;
-typedef struct _XGL_UPDATE_IMAGES
+typedef struct _VK_UPDATE_IMAGES
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_UPDATE_IMAGES
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_UPDATE_IMAGES
const void* pNext; // Pointer to next structure
- XGL_DESCRIPTOR_TYPE descriptorType;
+ VK_DESCRIPTOR_TYPE descriptorType;
uint32_t binding; // Binding of the image (array)
uint32_t arrayIndex; // First element of the array to update or zero otherwise
uint32_t count; // Number of elements to update
- const XGL_IMAGE_VIEW_ATTACH_INFO* pImageViews;
-} XGL_UPDATE_IMAGES;
+ const VK_IMAGE_VIEW_ATTACH_INFO* pImageViews;
+} VK_UPDATE_IMAGES;
-typedef struct _XGL_UPDATE_BUFFERS
+typedef struct _VK_UPDATE_BUFFERS
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_UPDATE_BUFFERS
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_UPDATE_BUFFERS
const void* pNext; // Pointer to next structure
- XGL_DESCRIPTOR_TYPE descriptorType;
+ VK_DESCRIPTOR_TYPE descriptorType;
uint32_t binding; // Binding of the buffer (array)
uint32_t arrayIndex; // First element of the array to update or zero otherwise
uint32_t count; // Number of elements to update
- const XGL_BUFFER_VIEW_ATTACH_INFO* pBufferViews;
-} XGL_UPDATE_BUFFERS;
+ const VK_BUFFER_VIEW_ATTACH_INFO* pBufferViews;
+} VK_UPDATE_BUFFERS;
-typedef struct _XGL_UPDATE_AS_COPY
+typedef struct _VK_UPDATE_AS_COPY
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_UPDATE_AS_COPY
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_UPDATE_AS_COPY
const void* pNext; // Pointer to next structure
- XGL_DESCRIPTOR_TYPE descriptorType;
- XGL_DESCRIPTOR_SET descriptorSet;
+ VK_DESCRIPTOR_TYPE descriptorType;
+ VK_DESCRIPTOR_SET descriptorSet;
uint32_t binding;
uint32_t arrayElement;
uint32_t count;
-} XGL_UPDATE_AS_COPY;
+} VK_UPDATE_AS_COPY;
-typedef struct _XGL_BUFFER_CREATE_INFO
+typedef struct _VK_BUFFER_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_BUFFER_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO
const void* pNext; // Pointer to next structure.
- XGL_GPU_SIZE size; // Specified in bytes
- XGL_FLAGS usage; // XGL_BUFFER_USAGE_FLAGS
- XGL_FLAGS flags; // XGL_BUFFER_CREATE_FLAGS
-} XGL_BUFFER_CREATE_INFO;
+ VK_GPU_SIZE size; // Specified in bytes
+ VK_FLAGS usage; // VK_BUFFER_USAGE_FLAGS
+ VK_FLAGS flags; // VK_BUFFER_CREATE_FLAGS
+} VK_BUFFER_CREATE_INFO;
-typedef struct _XGL_BUFFER_VIEW_CREATE_INFO
+typedef struct _VK_BUFFER_VIEW_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_BUFFER_VIEW_CREATE_INFO
const void* pNext; // Pointer to next structure.
- XGL_BUFFER buffer;
- XGL_BUFFER_VIEW_TYPE viewType;
- XGL_FORMAT format; // Optionally specifies format of elements
- XGL_GPU_SIZE offset; // Specified in bytes
- XGL_GPU_SIZE range; // View size specified in bytes
-} XGL_BUFFER_VIEW_CREATE_INFO;
+ VK_BUFFER buffer;
+ VK_BUFFER_VIEW_TYPE viewType;
+ VK_FORMAT format; // Optionally specifies format of elements
+ VK_GPU_SIZE offset; // Specified in bytes
+ VK_GPU_SIZE range; // View size specified in bytes
+} VK_BUFFER_VIEW_CREATE_INFO;
-typedef struct _XGL_IMAGE_SUBRESOURCE
+typedef struct _VK_IMAGE_SUBRESOURCE
{
- XGL_IMAGE_ASPECT aspect;
+ VK_IMAGE_ASPECT aspect;
uint32_t mipLevel;
uint32_t arraySlice;
-} XGL_IMAGE_SUBRESOURCE;
+} VK_IMAGE_SUBRESOURCE;
-typedef struct _XGL_IMAGE_SUBRESOURCE_RANGE
+typedef struct _VK_IMAGE_SUBRESOURCE_RANGE
{
- XGL_IMAGE_ASPECT aspect;
+ VK_IMAGE_ASPECT aspect;
uint32_t baseMipLevel;
uint32_t mipLevels;
uint32_t baseArraySlice;
uint32_t arraySize;
-} XGL_IMAGE_SUBRESOURCE_RANGE;
+} VK_IMAGE_SUBRESOURCE_RANGE;
-typedef struct _XGL_EVENT_WAIT_INFO
+typedef struct _VK_EVENT_WAIT_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_EVENT_WAIT_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_EVENT_WAIT_INFO
const void* pNext; // Pointer to next structure.
uint32_t eventCount; // Number of events to wait on
- const XGL_EVENT* pEvents; // Array of event objects to wait on
+ const VK_EVENT* pEvents; // Array of event objects to wait on
- XGL_WAIT_EVENT waitEvent; // Pipeline event where the wait should happen
+ VK_WAIT_EVENT waitEvent; // Pipeline event where the wait should happen
uint32_t memBarrierCount; // Number of memory barriers
- const void** ppMemBarriers; // Array of pointers to memory barriers (any of them can be either XGL_MEMORY_BARRIER, XGL_BUFFER_MEMORY_BARRIER, or XGL_IMAGE_MEMORY_BARRIER)
-} XGL_EVENT_WAIT_INFO;
+ const void** ppMemBarriers; // Array of pointers to memory barriers (any of them can be either VK_MEMORY_BARRIER, VK_BUFFER_MEMORY_BARRIER, or VK_IMAGE_MEMORY_BARRIER)
+} VK_EVENT_WAIT_INFO;
-typedef struct _XGL_PIPELINE_BARRIER
+typedef struct _VK_PIPELINE_BARRIER
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_BARRIER
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_BARRIER
const void* pNext; // Pointer to next structure.
uint32_t eventCount; // Number of events to wait on
- const XGL_PIPE_EVENT* pEvents; // Array of pipeline events to wait on
+ const VK_PIPE_EVENT* pEvents; // Array of pipeline events to wait on
- XGL_WAIT_EVENT waitEvent; // Pipeline event where the wait should happen
+ VK_WAIT_EVENT waitEvent; // Pipeline event where the wait should happen
uint32_t memBarrierCount; // Number of memory barriers
- const void** ppMemBarriers; // Array of pointers to memory barriers (any of them can be either XGL_MEMORY_BARRIER, XGL_BUFFER_MEMORY_BARRIER, or XGL_IMAGE_MEMORY_BARRIER)
-} XGL_PIPELINE_BARRIER;
+ const void** ppMemBarriers; // Array of pointers to memory barriers (any of them can be either VK_MEMORY_BARRIER, VK_BUFFER_MEMORY_BARRIER, or VK_IMAGE_MEMORY_BARRIER)
+} VK_PIPELINE_BARRIER;
-typedef struct _XGL_MEMORY_BARRIER
+typedef struct _VK_MEMORY_BARRIER
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_MEMORY_BARRIER
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_MEMORY_BARRIER
const void* pNext; // Pointer to next structure.
- XGL_FLAGS outputMask; // Outputs the barrier should sync (see XGL_MEMORY_OUTPUT_FLAGS)
- XGL_FLAGS inputMask; // Inputs the barrier should sync to (see XGL_MEMORY_INPUT_FLAGS)
-} XGL_MEMORY_BARRIER;
+ VK_FLAGS outputMask; // Outputs the barrier should sync (see VK_MEMORY_OUTPUT_FLAGS)
+ VK_FLAGS inputMask; // Inputs the barrier should sync to (see VK_MEMORY_INPUT_FLAGS)
+} VK_MEMORY_BARRIER;
-typedef struct _XGL_BUFFER_MEMORY_BARRIER
+typedef struct _VK_BUFFER_MEMORY_BARRIER
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER
const void* pNext; // Pointer to next structure.
- XGL_FLAGS outputMask; // Outputs the barrier should sync (see XGL_MEMORY_OUTPUT_FLAGS)
- XGL_FLAGS inputMask; // Inputs the barrier should sync to (see XGL_MEMORY_INPUT_FLAGS)
+ VK_FLAGS outputMask; // Outputs the barrier should sync (see VK_MEMORY_OUTPUT_FLAGS)
+ VK_FLAGS inputMask; // Inputs the barrier should sync to (see VK_MEMORY_INPUT_FLAGS)
- XGL_BUFFER buffer; // Buffer to sync
+ VK_BUFFER buffer; // Buffer to sync
- XGL_GPU_SIZE offset; // Offset within the buffer to sync
- XGL_GPU_SIZE size; // Amount of bytes to sync
-} XGL_BUFFER_MEMORY_BARRIER;
+ VK_GPU_SIZE offset; // Offset within the buffer to sync
+ VK_GPU_SIZE size; // Number of bytes to sync
+} VK_BUFFER_MEMORY_BARRIER;
-typedef struct _XGL_IMAGE_MEMORY_BARRIER
+typedef struct _VK_IMAGE_MEMORY_BARRIER
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER
const void* pNext; // Pointer to next structure.
- XGL_FLAGS outputMask; // Outputs the barrier should sync (see XGL_MEMORY_OUTPUT_FLAGS)
- XGL_FLAGS inputMask; // Inputs the barrier should sync to (see XGL_MEMORY_INPUT_FLAGS)
+ VK_FLAGS outputMask; // Outputs the barrier should sync (see VK_MEMORY_OUTPUT_FLAGS)
+ VK_FLAGS inputMask; // Inputs the barrier should sync to (see VK_MEMORY_INPUT_FLAGS)
- XGL_IMAGE_LAYOUT oldLayout; // Current layout of the image
- XGL_IMAGE_LAYOUT newLayout; // New layout to transition the image to
+ VK_IMAGE_LAYOUT oldLayout; // Current layout of the image
+ VK_IMAGE_LAYOUT newLayout; // New layout to transition the image to
- XGL_IMAGE image; // Image to sync
+ VK_IMAGE image; // Image to sync
- XGL_IMAGE_SUBRESOURCE_RANGE subresourceRange; // Subresource range to sync
-} XGL_IMAGE_MEMORY_BARRIER;
+ VK_IMAGE_SUBRESOURCE_RANGE subresourceRange; // Subresource range to sync
+} VK_IMAGE_MEMORY_BARRIER;
-typedef struct _XGL_IMAGE_CREATE_INFO
+typedef struct _VK_IMAGE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO
const void* pNext; // Pointer to next structure.
- XGL_IMAGE_TYPE imageType;
- XGL_FORMAT format;
- XGL_EXTENT3D extent;
+ VK_IMAGE_TYPE imageType;
+ VK_FORMAT format;
+ VK_EXTENT3D extent;
uint32_t mipLevels;
uint32_t arraySize;
uint32_t samples;
- XGL_IMAGE_TILING tiling;
- XGL_FLAGS usage; // XGL_IMAGE_USAGE_FLAGS
- XGL_FLAGS flags; // XGL_IMAGE_CREATE_FLAGS
-} XGL_IMAGE_CREATE_INFO;
+ VK_IMAGE_TILING tiling;
+ VK_FLAGS usage; // VK_IMAGE_USAGE_FLAGS
+ VK_FLAGS flags; // VK_IMAGE_CREATE_FLAGS
+} VK_IMAGE_CREATE_INFO;
-typedef struct _XGL_PEER_IMAGE_OPEN_INFO
+typedef struct _VK_PEER_IMAGE_OPEN_INFO
{
- XGL_IMAGE originalImage;
-} XGL_PEER_IMAGE_OPEN_INFO;
+ VK_IMAGE originalImage;
+} VK_PEER_IMAGE_OPEN_INFO;
-typedef struct _XGL_SUBRESOURCE_LAYOUT
+typedef struct _VK_SUBRESOURCE_LAYOUT
{
- XGL_GPU_SIZE offset; // Specified in bytes
- XGL_GPU_SIZE size; // Specified in bytes
- XGL_GPU_SIZE rowPitch; // Specified in bytes
- XGL_GPU_SIZE depthPitch; // Specified in bytes
-} XGL_SUBRESOURCE_LAYOUT;
+ VK_GPU_SIZE offset; // Specified in bytes
+ VK_GPU_SIZE size; // Specified in bytes
+ VK_GPU_SIZE rowPitch; // Specified in bytes
+ VK_GPU_SIZE depthPitch; // Specified in bytes
+} VK_SUBRESOURCE_LAYOUT;
-typedef struct _XGL_IMAGE_VIEW_CREATE_INFO
+typedef struct _VK_IMAGE_VIEW_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_IMAGE image;
- XGL_IMAGE_VIEW_TYPE viewType;
- XGL_FORMAT format;
- XGL_CHANNEL_MAPPING channels;
- XGL_IMAGE_SUBRESOURCE_RANGE subresourceRange;
+ VK_IMAGE image;
+ VK_IMAGE_VIEW_TYPE viewType;
+ VK_FORMAT format;
+ VK_CHANNEL_MAPPING channels;
+ VK_IMAGE_SUBRESOURCE_RANGE subresourceRange;
float minLod;
-} XGL_IMAGE_VIEW_CREATE_INFO;
+} VK_IMAGE_VIEW_CREATE_INFO;
-typedef struct _XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO
+typedef struct _VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_COLOR_ATTACHMENT_VIEW_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_IMAGE image;
- XGL_FORMAT format;
+ VK_IMAGE image;
+ VK_FORMAT format;
uint32_t mipLevel;
uint32_t baseArraySlice;
uint32_t arraySize;
- XGL_IMAGE msaaResolveImage;
- XGL_IMAGE_SUBRESOURCE_RANGE msaaResolveSubResource;
-} XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO;
+ VK_IMAGE msaaResolveImage;
+ VK_IMAGE_SUBRESOURCE_RANGE msaaResolveSubResource;
+} VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO;
-typedef struct _XGL_DEPTH_STENCIL_VIEW_CREATE_INFO
+typedef struct _VK_DEPTH_STENCIL_VIEW_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DEPTH_STENCIL_VIEW_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_IMAGE image;
+ VK_IMAGE image;
uint32_t mipLevel;
uint32_t baseArraySlice;
uint32_t arraySize;
- XGL_IMAGE msaaResolveImage;
- XGL_IMAGE_SUBRESOURCE_RANGE msaaResolveSubResource;
- XGL_FLAGS flags; // XGL_DEPTH_STENCIL_VIEW_CREATE_FLAGS
-} XGL_DEPTH_STENCIL_VIEW_CREATE_INFO;
+ VK_IMAGE msaaResolveImage;
+ VK_IMAGE_SUBRESOURCE_RANGE msaaResolveSubResource;
+ VK_FLAGS flags; // VK_DEPTH_STENCIL_VIEW_CREATE_FLAGS
+} VK_DEPTH_STENCIL_VIEW_CREATE_INFO;
-typedef struct _XGL_COLOR_ATTACHMENT_BIND_INFO
+typedef struct _VK_COLOR_ATTACHMENT_BIND_INFO
{
- XGL_COLOR_ATTACHMENT_VIEW view;
- XGL_IMAGE_LAYOUT layout;
-} XGL_COLOR_ATTACHMENT_BIND_INFO;
+ VK_COLOR_ATTACHMENT_VIEW view;
+ VK_IMAGE_LAYOUT layout;
+} VK_COLOR_ATTACHMENT_BIND_INFO;
-typedef struct _XGL_DEPTH_STENCIL_BIND_INFO
+typedef struct _VK_DEPTH_STENCIL_BIND_INFO
{
- XGL_DEPTH_STENCIL_VIEW view;
- XGL_IMAGE_LAYOUT layout;
-} XGL_DEPTH_STENCIL_BIND_INFO;
+ VK_DEPTH_STENCIL_VIEW view;
+ VK_IMAGE_LAYOUT layout;
+} VK_DEPTH_STENCIL_BIND_INFO;
-typedef struct _XGL_BUFFER_COPY
+typedef struct _VK_BUFFER_COPY
{
- XGL_GPU_SIZE srcOffset; // Specified in bytes
- XGL_GPU_SIZE destOffset; // Specified in bytes
- XGL_GPU_SIZE copySize; // Specified in bytes
-} XGL_BUFFER_COPY;
+ VK_GPU_SIZE srcOffset; // Specified in bytes
+ VK_GPU_SIZE destOffset; // Specified in bytes
+ VK_GPU_SIZE copySize; // Specified in bytes
+} VK_BUFFER_COPY;
-typedef struct _XGL_IMAGE_MEMORY_BIND_INFO
+typedef struct _VK_IMAGE_MEMORY_BIND_INFO
{
- XGL_IMAGE_SUBRESOURCE subresource;
- XGL_OFFSET3D offset;
- XGL_EXTENT3D extent;
-} XGL_IMAGE_MEMORY_BIND_INFO;
+ VK_IMAGE_SUBRESOURCE subresource;
+ VK_OFFSET3D offset;
+ VK_EXTENT3D extent;
+} VK_IMAGE_MEMORY_BIND_INFO;
-typedef struct _XGL_IMAGE_COPY
+typedef struct _VK_IMAGE_COPY
{
- XGL_IMAGE_SUBRESOURCE srcSubresource;
- XGL_OFFSET3D srcOffset;
- XGL_IMAGE_SUBRESOURCE destSubresource;
- XGL_OFFSET3D destOffset;
- XGL_EXTENT3D extent;
-} XGL_IMAGE_COPY;
+ VK_IMAGE_SUBRESOURCE srcSubresource;
+ VK_OFFSET3D srcOffset;
+ VK_IMAGE_SUBRESOURCE destSubresource;
+ VK_OFFSET3D destOffset;
+ VK_EXTENT3D extent;
+} VK_IMAGE_COPY;
-typedef struct _XGL_IMAGE_BLIT
+typedef struct _VK_IMAGE_BLIT
{
- XGL_IMAGE_SUBRESOURCE srcSubresource;
- XGL_OFFSET3D srcOffset;
- XGL_EXTENT3D srcExtent;
- XGL_IMAGE_SUBRESOURCE destSubresource;
- XGL_OFFSET3D destOffset;
- XGL_EXTENT3D destExtent;
-} XGL_IMAGE_BLIT;
+ VK_IMAGE_SUBRESOURCE srcSubresource;
+ VK_OFFSET3D srcOffset;
+ VK_EXTENT3D srcExtent;
+ VK_IMAGE_SUBRESOURCE destSubresource;
+ VK_OFFSET3D destOffset;
+ VK_EXTENT3D destExtent;
+} VK_IMAGE_BLIT;
-typedef struct _XGL_BUFFER_IMAGE_COPY
+typedef struct _VK_BUFFER_IMAGE_COPY
{
- XGL_GPU_SIZE bufferOffset; // Specified in bytes
- XGL_IMAGE_SUBRESOURCE imageSubresource;
- XGL_OFFSET3D imageOffset;
- XGL_EXTENT3D imageExtent;
-} XGL_BUFFER_IMAGE_COPY;
+ VK_GPU_SIZE bufferOffset; // Specified in bytes
+ VK_IMAGE_SUBRESOURCE imageSubresource;
+ VK_OFFSET3D imageOffset;
+ VK_EXTENT3D imageExtent;
+} VK_BUFFER_IMAGE_COPY;
-typedef struct _XGL_IMAGE_RESOLVE
+typedef struct _VK_IMAGE_RESOLVE
{
- XGL_IMAGE_SUBRESOURCE srcSubresource;
- XGL_OFFSET2D srcOffset;
- XGL_IMAGE_SUBRESOURCE destSubresource;
- XGL_OFFSET2D destOffset;
- XGL_EXTENT2D extent;
-} XGL_IMAGE_RESOLVE;
+ VK_IMAGE_SUBRESOURCE srcSubresource;
+ VK_OFFSET2D srcOffset;
+ VK_IMAGE_SUBRESOURCE destSubresource;
+ VK_OFFSET2D destOffset;
+ VK_EXTENT2D extent;
+} VK_IMAGE_RESOLVE;
-typedef struct _XGL_SHADER_CREATE_INFO
+typedef struct _VK_SHADER_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_SHADER_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_SHADER_CREATE_INFO
const void* pNext; // Pointer to next structure
size_t codeSize; // Specified in bytes
const void* pCode;
- XGL_FLAGS flags; // Reserved
-} XGL_SHADER_CREATE_INFO;
+ VK_FLAGS flags; // Reserved
+} VK_SHADER_CREATE_INFO;
-typedef struct _XGL_DESCRIPTOR_SET_LAYOUT_BINDING
+typedef struct _VK_DESCRIPTOR_SET_LAYOUT_BINDING
{
- XGL_DESCRIPTOR_TYPE descriptorType;
+ VK_DESCRIPTOR_TYPE descriptorType;
uint32_t count;
- XGL_FLAGS stageFlags; // XGL_SHADER_STAGE_FLAGS
- const XGL_SAMPLER* pImmutableSamplers;
-} XGL_DESCRIPTOR_SET_LAYOUT_BINDING;
+ VK_FLAGS stageFlags; // VK_SHADER_STAGE_FLAGS
+ const VK_SAMPLER* pImmutableSamplers;
+} VK_DESCRIPTOR_SET_LAYOUT_BINDING;
-typedef struct _XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO
+typedef struct _VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t count; // Number of bindings in the descriptor set layout
- const XGL_DESCRIPTOR_SET_LAYOUT_BINDING* pBinding; // Array of descriptor set layout bindings
-} XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
+ const VK_DESCRIPTOR_SET_LAYOUT_BINDING* pBinding; // Array of descriptor set layout bindings
+} VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
-typedef struct _XGL_DESCRIPTOR_TYPE_COUNT
+typedef struct _VK_DESCRIPTOR_TYPE_COUNT
{
- XGL_DESCRIPTOR_TYPE type;
+ VK_DESCRIPTOR_TYPE type;
uint32_t count;
-} XGL_DESCRIPTOR_TYPE_COUNT;
+} VK_DESCRIPTOR_TYPE_COUNT;
-typedef struct _XGL_DESCRIPTOR_POOL_CREATE_INFO
+typedef struct _VK_DESCRIPTOR_POOL_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t count;
- const XGL_DESCRIPTOR_TYPE_COUNT* pTypeCount;
-} XGL_DESCRIPTOR_POOL_CREATE_INFO;
+ const VK_DESCRIPTOR_TYPE_COUNT* pTypeCount;
+} VK_DESCRIPTOR_POOL_CREATE_INFO;
-typedef struct _XGL_LINK_CONST_BUFFER
+typedef struct _VK_LINK_CONST_BUFFER
{
uint32_t bufferId;
size_t bufferSize;
const void* pBufferData;
-} XGL_LINK_CONST_BUFFER;
+} VK_LINK_CONST_BUFFER;
-typedef struct _XGL_SPECIALIZATION_MAP_ENTRY
+typedef struct _VK_SPECIALIZATION_MAP_ENTRY
{
uint32_t constantId; // The SpecConstant ID specified in the BIL
uint32_t offset; // Offset of the value in the data block
-} XGL_SPECIALIZATION_MAP_ENTRY;
+} VK_SPECIALIZATION_MAP_ENTRY;
-typedef struct _XGL_SPECIALIZATION_INFO
+typedef struct _VK_SPECIALIZATION_INFO
{
uint32_t mapEntryCount;
- const XGL_SPECIALIZATION_MAP_ENTRY* pMap; // mapEntryCount entries
+ const VK_SPECIALIZATION_MAP_ENTRY* pMap; // mapEntryCount entries
const void* pData;
-} XGL_SPECIALIZATION_INFO;
+} VK_SPECIALIZATION_INFO;
-typedef struct _XGL_PIPELINE_SHADER
+typedef struct _VK_PIPELINE_SHADER
{
- XGL_PIPELINE_SHADER_STAGE stage;
- XGL_SHADER shader;
+ VK_PIPELINE_SHADER_STAGE stage;
+ VK_SHADER shader;
uint32_t linkConstBufferCount;
- const XGL_LINK_CONST_BUFFER* pLinkConstBufferInfo;
- const XGL_SPECIALIZATION_INFO* pSpecializationInfo;
-} XGL_PIPELINE_SHADER;
+ const VK_LINK_CONST_BUFFER* pLinkConstBufferInfo;
+ const VK_SPECIALIZATION_INFO* pSpecializationInfo;
+} VK_PIPELINE_SHADER;
-typedef struct _XGL_COMPUTE_PIPELINE_CREATE_INFO
+typedef struct _VK_COMPUTE_PIPELINE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_COMPUTE_PIPELINE_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_PIPELINE_SHADER cs;
- XGL_FLAGS flags; // XGL_PIPELINE_CREATE_FLAGS
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN setLayoutChain; // For local size fields zero is treated an invalid value
+ VK_PIPELINE_SHADER cs;
+ VK_FLAGS flags; // VK_PIPELINE_CREATE_FLAGS
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN setLayoutChain; // For the local size fields, zero is treated as an invalid value
uint32_t localSizeX;
uint32_t localSizeY;
uint32_t localSizeZ;
-} XGL_COMPUTE_PIPELINE_CREATE_INFO;
+} VK_COMPUTE_PIPELINE_CREATE_INFO;
-typedef struct _XGL_VERTEX_INPUT_BINDING_DESCRIPTION
+typedef struct _VK_VERTEX_INPUT_BINDING_DESCRIPTION
{
uint32_t binding; // Vertex buffer binding id
uint32_t strideInBytes; // Distance between vertices in bytes (0 = no advancement)
- XGL_VERTEX_INPUT_STEP_RATE stepRate; // Rate at which binding is incremented
-} XGL_VERTEX_INPUT_BINDING_DESCRIPTION;
+ VK_VERTEX_INPUT_STEP_RATE stepRate; // Rate at which binding is incremented
+} VK_VERTEX_INPUT_BINDING_DESCRIPTION;
-typedef struct _XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION
+typedef struct _VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION
{
uint32_t location; // location of the shader vertex attrib
uint32_t binding; // Vertex buffer binding id
- XGL_FORMAT format; // format of source data
+ VK_FORMAT format; // format of source data
uint32_t offsetInBytes; // Offset of first element in bytes from base of vertex
-} XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION;
+} VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION;
-typedef struct _XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO
+typedef struct _VK_PIPELINE_VERTEX_INPUT_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Should be XGL_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Should be VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t bindingCount; // number of bindings
- const XGL_VERTEX_INPUT_BINDING_DESCRIPTION* pVertexBindingDescriptions;
+ const VK_VERTEX_INPUT_BINDING_DESCRIPTION* pVertexBindingDescriptions;
uint32_t attributeCount; // number of attributes
- const XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION* pVertexAttributeDescriptions;
-} XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO;
+ const VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION* pVertexAttributeDescriptions;
+} VK_PIPELINE_VERTEX_INPUT_CREATE_INFO;
-typedef struct _XGL_PIPELINE_IA_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_IA_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_PRIMITIVE_TOPOLOGY topology;
+ VK_PRIMITIVE_TOPOLOGY topology;
bool32_t disableVertexReuse; // optional
bool32_t primitiveRestartEnable;
uint32_t primitiveRestartIndex; // optional (GL45)
-} XGL_PIPELINE_IA_STATE_CREATE_INFO;
+} VK_PIPELINE_IA_STATE_CREATE_INFO;
-typedef struct _XGL_PIPELINE_TESS_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_TESS_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t patchControlPoints;
-} XGL_PIPELINE_TESS_STATE_CREATE_INFO;
+} VK_PIPELINE_TESS_STATE_CREATE_INFO;
-typedef struct _XGL_PIPELINE_VP_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_VP_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t numViewports;
- XGL_COORDINATE_ORIGIN clipOrigin; // optional (GL45)
- XGL_DEPTH_MODE depthMode; // optional (GL45)
-} XGL_PIPELINE_VP_STATE_CREATE_INFO;
+ VK_COORDINATE_ORIGIN clipOrigin; // optional (GL45)
+ VK_DEPTH_MODE depthMode; // optional (GL45)
+} VK_PIPELINE_VP_STATE_CREATE_INFO;
-typedef struct _XGL_PIPELINE_RS_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_RS_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
bool32_t depthClipEnable;
bool32_t rasterizerDiscardEnable;
bool32_t programPointSize; // optional (GL45)
- XGL_COORDINATE_ORIGIN pointOrigin; // optional (GL45)
- XGL_PROVOKING_VERTEX_CONVENTION provokingVertex; // optional (GL45)
- XGL_FILL_MODE fillMode; // optional (GL45)
- XGL_CULL_MODE cullMode;
- XGL_FACE_ORIENTATION frontFace;
-} XGL_PIPELINE_RS_STATE_CREATE_INFO;
+ VK_COORDINATE_ORIGIN pointOrigin; // optional (GL45)
+ VK_PROVOKING_VERTEX_CONVENTION provokingVertex; // optional (GL45)
+ VK_FILL_MODE fillMode; // optional (GL45)
+ VK_CULL_MODE cullMode;
+ VK_FACE_ORIENTATION frontFace;
+} VK_PIPELINE_RS_STATE_CREATE_INFO;
-typedef struct _XGL_PIPELINE_MS_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_MS_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t samples;
bool32_t multisampleEnable; // optional (GL45)
bool32_t sampleShadingEnable; // optional (GL45)
float minSampleShading; // optional (GL45)
- XGL_SAMPLE_MASK sampleMask;
-} XGL_PIPELINE_MS_STATE_CREATE_INFO;
+ VK_SAMPLE_MASK sampleMask;
+} VK_PIPELINE_MS_STATE_CREATE_INFO;
-typedef struct _XGL_PIPELINE_CB_ATTACHMENT_STATE
+typedef struct _VK_PIPELINE_CB_ATTACHMENT_STATE
{
bool32_t blendEnable;
- XGL_FORMAT format;
- XGL_BLEND srcBlendColor;
- XGL_BLEND destBlendColor;
- XGL_BLEND_FUNC blendFuncColor;
- XGL_BLEND srcBlendAlpha;
- XGL_BLEND destBlendAlpha;
- XGL_BLEND_FUNC blendFuncAlpha;
+ VK_FORMAT format;
+ VK_BLEND srcBlendColor;
+ VK_BLEND destBlendColor;
+ VK_BLEND_FUNC blendFuncColor;
+ VK_BLEND srcBlendAlpha;
+ VK_BLEND destBlendAlpha;
+ VK_BLEND_FUNC blendFuncAlpha;
uint8_t channelWriteMask;
-} XGL_PIPELINE_CB_ATTACHMENT_STATE;
+} VK_PIPELINE_CB_ATTACHMENT_STATE;
-typedef struct _XGL_PIPELINE_CB_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_CB_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
bool32_t alphaToCoverageEnable;
bool32_t logicOpEnable;
- XGL_LOGIC_OP logicOp;
+ VK_LOGIC_OP logicOp;
uint32_t attachmentCount; // # of pAttachments
- const XGL_PIPELINE_CB_ATTACHMENT_STATE* pAttachments;
-} XGL_PIPELINE_CB_STATE_CREATE_INFO;
+ const VK_PIPELINE_CB_ATTACHMENT_STATE* pAttachments;
+} VK_PIPELINE_CB_STATE_CREATE_INFO;
-typedef struct _XGL_STENCIL_OP_STATE
+typedef struct _VK_STENCIL_OP_STATE
{
- XGL_STENCIL_OP stencilFailOp;
- XGL_STENCIL_OP stencilPassOp;
- XGL_STENCIL_OP stencilDepthFailOp;
- XGL_COMPARE_FUNC stencilFunc;
-} XGL_STENCIL_OP_STATE;
+ VK_STENCIL_OP stencilFailOp;
+ VK_STENCIL_OP stencilPassOp;
+ VK_STENCIL_OP stencilDepthFailOp;
+ VK_COMPARE_FUNC stencilFunc;
+} VK_STENCIL_OP_STATE;
-typedef struct _XGL_PIPELINE_DS_STATE_CREATE_INFO
+typedef struct _VK_PIPELINE_DS_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_FORMAT format;
+ VK_FORMAT format;
bool32_t depthTestEnable;
bool32_t depthWriteEnable;
- XGL_COMPARE_FUNC depthFunc;
+ VK_COMPARE_FUNC depthFunc;
bool32_t depthBoundsEnable; // optional (depth_bounds_test)
bool32_t stencilTestEnable;
- XGL_STENCIL_OP_STATE front;
- XGL_STENCIL_OP_STATE back;
-} XGL_PIPELINE_DS_STATE_CREATE_INFO;
+ VK_STENCIL_OP_STATE front;
+ VK_STENCIL_OP_STATE back;
+} VK_PIPELINE_DS_STATE_CREATE_INFO;
-typedef struct _XGL_PIPELINE_SHADER_STAGE_CREATE_INFO
+typedef struct _VK_PIPELINE_SHADER_STAGE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_PIPELINE_SHADER shader;
-} XGL_PIPELINE_SHADER_STAGE_CREATE_INFO;
+ VK_PIPELINE_SHADER shader;
+} VK_PIPELINE_SHADER_STAGE_CREATE_INFO;
-typedef struct _XGL_GRAPHICS_PIPELINE_CREATE_INFO
+typedef struct _VK_GRAPHICS_PIPELINE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_FLAGS flags; // XGL_PIPELINE_CREATE_FLAGS
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN pSetLayoutChain;
-} XGL_GRAPHICS_PIPELINE_CREATE_INFO;
+ VK_FLAGS flags; // VK_PIPELINE_CREATE_FLAGS
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN pSetLayoutChain;
+} VK_GRAPHICS_PIPELINE_CREATE_INFO;
-typedef struct _XGL_SAMPLER_CREATE_INFO
+typedef struct _VK_SAMPLER_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_SAMPLER_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_TEX_FILTER magFilter; // Filter mode for magnification
- XGL_TEX_FILTER minFilter; // Filter mode for minifiation
- XGL_TEX_MIPMAP_MODE mipMode; // Mipmap selection mode
- XGL_TEX_ADDRESS addressU;
- XGL_TEX_ADDRESS addressV;
- XGL_TEX_ADDRESS addressW;
+ VK_TEX_FILTER magFilter; // Filter mode for magnification
+ VK_TEX_FILTER minFilter; // Filter mode for minification
+ VK_TEX_MIPMAP_MODE mipMode; // Mipmap selection mode
+ VK_TEX_ADDRESS addressU;
+ VK_TEX_ADDRESS addressV;
+ VK_TEX_ADDRESS addressW;
float mipLodBias;
uint32_t maxAnisotropy;
- XGL_COMPARE_FUNC compareFunc;
+ VK_COMPARE_FUNC compareFunc;
float minLod;
float maxLod;
- XGL_BORDER_COLOR_TYPE borderColorType;
-} XGL_SAMPLER_CREATE_INFO;
+ VK_BORDER_COLOR_TYPE borderColorType;
+} VK_SAMPLER_CREATE_INFO;
-typedef struct _XGL_DYNAMIC_VP_STATE_CREATE_INFO
+typedef struct _VK_DYNAMIC_VP_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t viewportAndScissorCount; // number of entries in pViewports and pScissors
- const XGL_VIEWPORT* pViewports;
- const XGL_RECT* pScissors;
-} XGL_DYNAMIC_VP_STATE_CREATE_INFO;
+ const VK_VIEWPORT* pViewports;
+ const VK_RECT* pScissors;
+} VK_DYNAMIC_VP_STATE_CREATE_INFO;
-typedef struct _XGL_DYNAMIC_RS_STATE_CREATE_INFO
+typedef struct _VK_DYNAMIC_RS_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
float depthBias;
float depthBiasClamp;
float pointSize; // optional (GL45) - Size of points
float pointFadeThreshold; // optional (GL45) - Size of point fade threshold
float lineWidth; // optional (GL45) - Width of lines
-} XGL_DYNAMIC_RS_STATE_CREATE_INFO;
+} VK_DYNAMIC_RS_STATE_CREATE_INFO;
-typedef struct _XGL_DYNAMIC_CB_STATE_CREATE_INFO
+typedef struct _VK_DYNAMIC_CB_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
float blendConst[4];
-} XGL_DYNAMIC_CB_STATE_CREATE_INFO;
+} VK_DYNAMIC_CB_STATE_CREATE_INFO;
-typedef struct _XGL_DYNAMIC_DS_STATE_CREATE_INFO
+typedef struct _VK_DYNAMIC_DS_STATE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO
const void* pNext; // Pointer to next structure
float minDepth; // optional (depth_bounds_test)
float maxDepth; // optional (depth_bounds_test)
uint32_t stencilWriteMask;
uint32_t stencilFrontRef;
uint32_t stencilBackRef;
-} XGL_DYNAMIC_DS_STATE_CREATE_INFO;
+} VK_DYNAMIC_DS_STATE_CREATE_INFO;
-typedef struct _XGL_CMD_BUFFER_CREATE_INFO
+typedef struct _VK_CMD_BUFFER_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t queueNodeIndex;
- XGL_FLAGS flags;
-} XGL_CMD_BUFFER_CREATE_INFO;
+ VK_FLAGS flags;
+} VK_CMD_BUFFER_CREATE_INFO;
-typedef struct _XGL_CMD_BUFFER_BEGIN_INFO
+typedef struct _VK_CMD_BUFFER_BEGIN_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_CMD_BUFFER_BEGIN_INFO
const void* pNext; // Pointer to next structure
- XGL_FLAGS flags; // XGL_CMD_BUFFER_BUILD_FLAGS
-} XGL_CMD_BUFFER_BEGIN_INFO;
+ VK_FLAGS flags; // VK_CMD_BUFFER_BUILD_FLAGS
+} VK_CMD_BUFFER_BEGIN_INFO;
-typedef struct _XGL_RENDER_PASS_BEGIN
+typedef struct _VK_RENDER_PASS_BEGIN
{
- XGL_RENDER_PASS renderPass;
- XGL_FRAMEBUFFER framebuffer;
-} XGL_RENDER_PASS_BEGIN;
+ VK_RENDER_PASS renderPass;
+ VK_FRAMEBUFFER framebuffer;
+} VK_RENDER_PASS_BEGIN;
-typedef struct _XGL_CMD_BUFFER_GRAPHICS_BEGIN_INFO
+typedef struct _VK_CMD_BUFFER_GRAPHICS_BEGIN_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO
const void* pNext; // Pointer to next structure
- XGL_RENDER_PASS_BEGIN renderPassContinue; // Only needed when a render pass is split across two command buffers
-} XGL_CMD_BUFFER_GRAPHICS_BEGIN_INFO;
+ VK_RENDER_PASS_BEGIN renderPassContinue; // Only needed when a render pass is split across two command buffers
+} VK_CMD_BUFFER_GRAPHICS_BEGIN_INFO;
// Union allowing specification of floating point or raw color data. Actual value selected is based on image being cleared.
-typedef union _XGL_CLEAR_COLOR_VALUE
+typedef union _VK_CLEAR_COLOR_VALUE
{
float floatColor[4];
uint32_t rawColor[4];
-} XGL_CLEAR_COLOR_VALUE;
+} VK_CLEAR_COLOR_VALUE;
-typedef struct _XGL_CLEAR_COLOR
+typedef struct _VK_CLEAR_COLOR
{
- XGL_CLEAR_COLOR_VALUE color;
+ VK_CLEAR_COLOR_VALUE color;
bool32_t useRawValue;
-} XGL_CLEAR_COLOR;
+} VK_CLEAR_COLOR;
-typedef struct _XGL_RENDER_PASS_CREATE_INFO
+typedef struct _VK_RENDER_PASS_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_RECT renderArea;
+ VK_RECT renderArea;
uint32_t colorAttachmentCount;
- XGL_EXTENT2D extent;
+ VK_EXTENT2D extent;
uint32_t sampleCount;
uint32_t layers;
- const XGL_FORMAT* pColorFormats;
- const XGL_IMAGE_LAYOUT* pColorLayouts;
- const XGL_ATTACHMENT_LOAD_OP* pColorLoadOps;
- const XGL_ATTACHMENT_STORE_OP* pColorStoreOps;
- const XGL_CLEAR_COLOR* pColorLoadClearValues;
- XGL_FORMAT depthStencilFormat;
- XGL_IMAGE_LAYOUT depthStencilLayout;
- XGL_ATTACHMENT_LOAD_OP depthLoadOp;
+ const VK_FORMAT* pColorFormats;
+ const VK_IMAGE_LAYOUT* pColorLayouts;
+ const VK_ATTACHMENT_LOAD_OP* pColorLoadOps;
+ const VK_ATTACHMENT_STORE_OP* pColorStoreOps;
+ const VK_CLEAR_COLOR* pColorLoadClearValues;
+ VK_FORMAT depthStencilFormat;
+ VK_IMAGE_LAYOUT depthStencilLayout;
+ VK_ATTACHMENT_LOAD_OP depthLoadOp;
float depthLoadClearValue;
- XGL_ATTACHMENT_STORE_OP depthStoreOp;
- XGL_ATTACHMENT_LOAD_OP stencilLoadOp;
+ VK_ATTACHMENT_STORE_OP depthStoreOp;
+ VK_ATTACHMENT_LOAD_OP stencilLoadOp;
uint32_t stencilLoadClearValue;
- XGL_ATTACHMENT_STORE_OP stencilStoreOp;
-} XGL_RENDER_PASS_CREATE_INFO;
+ VK_ATTACHMENT_STORE_OP stencilStoreOp;
+} VK_RENDER_PASS_CREATE_INFO;
-typedef struct _XGL_EVENT_CREATE_INFO
+typedef struct _VK_EVENT_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_EVENT_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_EVENT_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_FLAGS flags; // Reserved
-} XGL_EVENT_CREATE_INFO;
+ VK_FLAGS flags; // Reserved
+} VK_EVENT_CREATE_INFO;
-typedef struct _XGL_FENCE_CREATE_INFO
+typedef struct _VK_FENCE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_FENCE_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_FENCE_CREATE_FLAGS flags; // XGL_FENCE_CREATE_FLAGS
-} XGL_FENCE_CREATE_INFO;
+ VK_FENCE_CREATE_FLAGS flags; // VK_FENCE_CREATE_FLAGS
+} VK_FENCE_CREATE_INFO;
-typedef struct _XGL_SEMAPHORE_CREATE_INFO
+typedef struct _VK_SEMAPHORE_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t initialCount;
- XGL_FLAGS flags; // XGL_SEMAPHORE_CREATE_FLAGS
-} XGL_SEMAPHORE_CREATE_INFO;
+ VK_FLAGS flags; // VK_SEMAPHORE_CREATE_FLAGS
+} VK_SEMAPHORE_CREATE_INFO;
-typedef struct _XGL_SEMAPHORE_OPEN_INFO
+typedef struct _VK_SEMAPHORE_OPEN_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_SEMAPHORE_OPEN_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_SEMAPHORE_OPEN_INFO
const void* pNext; // Pointer to next structure
- XGL_SEMAPHORE sharedSemaphore;
-} XGL_SEMAPHORE_OPEN_INFO;
+ VK_SEMAPHORE sharedSemaphore;
+} VK_SEMAPHORE_OPEN_INFO;
-typedef struct _XGL_PIPELINE_STATISTICS_DATA
+typedef struct _VK_PIPELINE_STATISTICS_DATA
{
uint64_t fsInvocations; // Fragment shader invocations
uint64_t cPrimitives; // Clipper primitives
uint64_t tcsInvocations; // Tessellation control shader invocations
uint64_t tesInvocations; // Tessellation evaluation shader invocations
uint64_t csInvocations; // Compute shader invocations
-} XGL_PIPELINE_STATISTICS_DATA;
+} VK_PIPELINE_STATISTICS_DATA;
-typedef struct _XGL_QUERY_POOL_CREATE_INFO
+typedef struct _VK_QUERY_POOL_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO
const void* pNext; // Pointer to next structure
- XGL_QUERY_TYPE queryType;
+ VK_QUERY_TYPE queryType;
uint32_t slots;
-} XGL_QUERY_POOL_CREATE_INFO;
+} VK_QUERY_POOL_CREATE_INFO;
-typedef struct _XGL_FRAMEBUFFER_CREATE_INFO
+typedef struct _VK_FRAMEBUFFER_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO
const void* pNext; // Pointer to next structure
uint32_t colorAttachmentCount;
- const XGL_COLOR_ATTACHMENT_BIND_INFO* pColorAttachments;
- const XGL_DEPTH_STENCIL_BIND_INFO* pDepthStencilAttachment;
+ const VK_COLOR_ATTACHMENT_BIND_INFO* pColorAttachments;
+ const VK_DEPTH_STENCIL_BIND_INFO* pDepthStencilAttachment;
uint32_t sampleCount;
uint32_t width;
uint32_t height;
uint32_t layers;
-} XGL_FRAMEBUFFER_CREATE_INFO;
+} VK_FRAMEBUFFER_CREATE_INFO;
-typedef struct _XGL_DRAW_INDIRECT_CMD
+typedef struct _VK_DRAW_INDIRECT_CMD
{
uint32_t vertexCount;
uint32_t instanceCount;
uint32_t firstVertex;
uint32_t firstInstance;
-} XGL_DRAW_INDIRECT_CMD;
+} VK_DRAW_INDIRECT_CMD;
-typedef struct _XGL_DRAW_INDEXED_INDIRECT_CMD
+typedef struct _VK_DRAW_INDEXED_INDIRECT_CMD
{
uint32_t indexCount;
uint32_t instanceCount;
uint32_t firstIndex;
int32_t vertexOffset;
uint32_t firstInstance;
-} XGL_DRAW_INDEXED_INDIRECT_CMD;
+} VK_DRAW_INDEXED_INDIRECT_CMD;
-typedef struct _XGL_DISPATCH_INDIRECT_CMD
+typedef struct _VK_DISPATCH_INDIRECT_CMD
{
uint32_t x;
uint32_t y;
uint32_t z;
-} XGL_DISPATCH_INDIRECT_CMD;
+} VK_DISPATCH_INDIRECT_CMD;
// ------------------------------------------------------------------------------------------------
// API functions
-typedef XGL_RESULT (XGLAPI *xglCreateInstanceType)(const XGL_INSTANCE_CREATE_INFO* pCreateInfo, XGL_INSTANCE* pInstance);
-typedef XGL_RESULT (XGLAPI *xglDestroyInstanceType)(XGL_INSTANCE instance);
-typedef XGL_RESULT (XGLAPI *xglEnumerateGpusType)(XGL_INSTANCE instance, uint32_t maxGpus, uint32_t* pGpuCount, XGL_PHYSICAL_GPU* pGpus);
-typedef XGL_RESULT (XGLAPI *xglGetGpuInfoType)(XGL_PHYSICAL_GPU gpu, XGL_PHYSICAL_GPU_INFO_TYPE infoType, size_t* pDataSize, void* pData);
-typedef void * (XGLAPI *xglGetProcAddrType)(XGL_PHYSICAL_GPU gpu, const char * pName);
-typedef XGL_RESULT (XGLAPI *xglCreateDeviceType)(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice);
-typedef XGL_RESULT (XGLAPI *xglDestroyDeviceType)(XGL_DEVICE device);
-typedef XGL_RESULT (XGLAPI *xglGetExtensionSupportType)(XGL_PHYSICAL_GPU gpu, const char* pExtName);
-typedef XGL_RESULT (XGLAPI *xglEnumerateLayersType)(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved);
-typedef XGL_RESULT (XGLAPI *xglGetDeviceQueueType)(XGL_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, XGL_QUEUE* pQueue);
-typedef XGL_RESULT (XGLAPI *xglQueueSubmitType)(XGL_QUEUE queue, uint32_t cmdBufferCount, const XGL_CMD_BUFFER* pCmdBuffers, XGL_FENCE fence);
-typedef XGL_RESULT (XGLAPI *xglQueueAddMemReferenceType)(XGL_QUEUE queue, XGL_GPU_MEMORY mem);
-typedef XGL_RESULT (XGLAPI *xglQueueRemoveMemReferenceType)(XGL_QUEUE queue, XGL_GPU_MEMORY mem);
-typedef XGL_RESULT (XGLAPI *xglQueueWaitIdleType)(XGL_QUEUE queue);
-typedef XGL_RESULT (XGLAPI *xglDeviceWaitIdleType)(XGL_DEVICE device);
-typedef XGL_RESULT (XGLAPI *xglAllocMemoryType)(XGL_DEVICE device, const XGL_MEMORY_ALLOC_INFO* pAllocInfo, XGL_GPU_MEMORY* pMem);
-typedef XGL_RESULT (XGLAPI *xglFreeMemoryType)(XGL_GPU_MEMORY mem);
-typedef XGL_RESULT (XGLAPI *xglSetMemoryPriorityType)(XGL_GPU_MEMORY mem, XGL_MEMORY_PRIORITY priority);
-typedef XGL_RESULT (XGLAPI *xglMapMemoryType)(XGL_GPU_MEMORY mem, XGL_FLAGS flags, void** ppData);
-typedef XGL_RESULT (XGLAPI *xglUnmapMemoryType)(XGL_GPU_MEMORY mem);
-typedef XGL_RESULT (XGLAPI *xglPinSystemMemoryType)(XGL_DEVICE device, const void* pSysMem, size_t memSize, XGL_GPU_MEMORY* pMem);
-typedef XGL_RESULT (XGLAPI *xglGetMultiGpuCompatibilityType)(XGL_PHYSICAL_GPU gpu0, XGL_PHYSICAL_GPU gpu1, XGL_GPU_COMPATIBILITY_INFO* pInfo);
-typedef XGL_RESULT (XGLAPI *xglOpenSharedMemoryType)(XGL_DEVICE device, const XGL_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem);
-typedef XGL_RESULT (XGLAPI *xglOpenSharedSemaphoreType)(XGL_DEVICE device, const XGL_SEMAPHORE_OPEN_INFO* pOpenInfo, XGL_SEMAPHORE* pSemaphore);
-typedef XGL_RESULT (XGLAPI *xglOpenPeerMemoryType)(XGL_DEVICE device, const XGL_PEER_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem);
-typedef XGL_RESULT (XGLAPI *xglOpenPeerImageType)(XGL_DEVICE device, const XGL_PEER_IMAGE_OPEN_INFO* pOpenInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem);
-typedef XGL_RESULT (XGLAPI *xglDestroyObjectType)(XGL_OBJECT object);
-typedef XGL_RESULT (XGLAPI *xglGetObjectInfoType)(XGL_BASE_OBJECT object, XGL_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData);
-typedef XGL_RESULT (XGLAPI *xglBindObjectMemoryType)(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_MEMORY mem, XGL_GPU_SIZE offset);
-typedef XGL_RESULT (XGLAPI *xglBindObjectMemoryRangeType)(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_SIZE rangeOffset,XGL_GPU_SIZE rangeSize, XGL_GPU_MEMORY mem, XGL_GPU_SIZE memOffset);
-typedef XGL_RESULT (XGLAPI *xglBindImageMemoryRangeType)(XGL_IMAGE image, uint32_t allocationIdx, const XGL_IMAGE_MEMORY_BIND_INFO* bindInfo, XGL_GPU_MEMORY mem, XGL_GPU_SIZE memOffset);
-typedef XGL_RESULT (XGLAPI *xglCreateFenceType)(XGL_DEVICE device, const XGL_FENCE_CREATE_INFO* pCreateInfo, XGL_FENCE* pFence);
-typedef XGL_RESULT (XGLAPI *xglResetFencesType)(XGL_DEVICE device, uint32_t fenceCount, XGL_FENCE* pFences);
-typedef XGL_RESULT (XGLAPI *xglGetFenceStatusType)(XGL_FENCE fence);
-typedef XGL_RESULT (XGLAPI *xglWaitForFencesType)(XGL_DEVICE device, uint32_t fenceCount, const XGL_FENCE* pFences, bool32_t waitAll, uint64_t timeout);
-typedef XGL_RESULT (XGLAPI *xglCreateSemaphoreType)(XGL_DEVICE device, const XGL_SEMAPHORE_CREATE_INFO* pCreateInfo, XGL_SEMAPHORE* pSemaphore);
-typedef XGL_RESULT (XGLAPI *xglQueueSignalSemaphoreType)(XGL_QUEUE queue, XGL_SEMAPHORE semaphore);
-typedef XGL_RESULT (XGLAPI *xglQueueWaitSemaphoreType)(XGL_QUEUE queue, XGL_SEMAPHORE semaphore);
-typedef XGL_RESULT (XGLAPI *xglCreateEventType)(XGL_DEVICE device, const XGL_EVENT_CREATE_INFO* pCreateInfo, XGL_EVENT* pEvent);
-typedef XGL_RESULT (XGLAPI *xglGetEventStatusType)(XGL_EVENT event);
-typedef XGL_RESULT (XGLAPI *xglSetEventType)(XGL_EVENT event);
-typedef XGL_RESULT (XGLAPI *xglResetEventType)(XGL_EVENT event);
-typedef XGL_RESULT (XGLAPI *xglCreateQueryPoolType)(XGL_DEVICE device, const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo, XGL_QUERY_POOL* pQueryPool);
-typedef XGL_RESULT (XGLAPI *xglGetQueryPoolResultsType)(XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount, size_t* pDataSize, void* pData);
-typedef XGL_RESULT (XGLAPI *xglGetFormatInfoType)(XGL_DEVICE device, XGL_FORMAT format, XGL_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData);
-typedef XGL_RESULT (XGLAPI *xglCreateBufferType)(XGL_DEVICE device, const XGL_BUFFER_CREATE_INFO* pCreateInfo, XGL_BUFFER* pBuffer);
-typedef XGL_RESULT (XGLAPI *xglCreateBufferViewType)(XGL_DEVICE device, const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo, XGL_BUFFER_VIEW* pView);
-typedef XGL_RESULT (XGLAPI *xglCreateImageType)(XGL_DEVICE device, const XGL_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage);
-typedef XGL_RESULT (XGLAPI *xglGetImageSubresourceInfoType)(XGL_IMAGE image, const XGL_IMAGE_SUBRESOURCE* pSubresource, XGL_SUBRESOURCE_INFO_TYPE infoType, size_t* pDataSize, void* pData);
-typedef XGL_RESULT (XGLAPI *xglCreateImageViewType)(XGL_DEVICE device, const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo, XGL_IMAGE_VIEW* pView);
-typedef XGL_RESULT (XGLAPI *xglCreateColorAttachmentViewType)(XGL_DEVICE device, const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo, XGL_COLOR_ATTACHMENT_VIEW* pView);
-typedef XGL_RESULT (XGLAPI *xglCreateDepthStencilViewType)(XGL_DEVICE device, const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, XGL_DEPTH_STENCIL_VIEW* pView);
-typedef XGL_RESULT (XGLAPI *xglCreateShaderType)(XGL_DEVICE device, const XGL_SHADER_CREATE_INFO* pCreateInfo, XGL_SHADER* pShader);
-typedef XGL_RESULT (XGLAPI *xglCreateGraphicsPipelineType)(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline);
-typedef XGL_RESULT (XGLAPI *xglCreateGraphicsPipelineDerivativeType)(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE basePipeline, XGL_PIPELINE* pPipeline);
-typedef XGL_RESULT (XGLAPI *xglCreateComputePipelineType)(XGL_DEVICE device, const XGL_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline);
-typedef XGL_RESULT (XGLAPI *xglStorePipelineType)(XGL_PIPELINE pipeline, size_t* pDataSize, void* pData);
-typedef XGL_RESULT (XGLAPI *xglLoadPipelineType)(XGL_DEVICE device, size_t dataSize, const void* pData, XGL_PIPELINE* pPipeline);
-typedef XGL_RESULT (XGLAPI *xglLoadPipelineDerivativeType)(XGL_DEVICE device, size_t dataSize, const void* pData, XGL_PIPELINE basePipeline, XGL_PIPELINE* pPipeline);
-typedef XGL_RESULT (XGLAPI *xglCreateSamplerType)(XGL_DEVICE device, const XGL_SAMPLER_CREATE_INFO* pCreateInfo, XGL_SAMPLER* pSampler);
-typedef XGL_RESULT (XGLAPI *xglCreateDescriptorSetLayoutType)(XGL_DEVICE device, const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_SET_LAYOUT* pSetLayout);
-typedef XGL_RESULT (XGLAPI *xglCreateDescriptorSetLayoutChainType)(XGL_DEVICE device, uint32_t setLayoutArrayCount, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain);
-typedef XGL_RESULT (XGLAPI *xglBeginDescriptorPoolUpdateType)(XGL_DEVICE device, XGL_DESCRIPTOR_UPDATE_MODE updateMode);
-typedef XGL_RESULT (XGLAPI *xglEndDescriptorPoolUpdateType)(XGL_DEVICE device, XGL_CMD_BUFFER cmd);
-typedef XGL_RESULT (XGLAPI *xglCreateDescriptorPoolType)(XGL_DEVICE device, XGL_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const XGL_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_POOL* pDescriptorPool);
-typedef XGL_RESULT (XGLAPI *xglResetDescriptorPoolType)(XGL_DESCRIPTOR_POOL descriptorPool);
-typedef XGL_RESULT (XGLAPI *xglAllocDescriptorSetsType)(XGL_DESCRIPTOR_POOL descriptorPool, XGL_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayouts, XGL_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount);
-typedef void (XGLAPI *xglClearDescriptorSetsType)(XGL_DESCRIPTOR_POOL descriptorPool, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets);
-typedef void (XGLAPI *xglUpdateDescriptorsType)(XGL_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray);
-typedef XGL_RESULT (XGLAPI *xglCreateDynamicViewportStateType)(XGL_DEVICE device, const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_VP_STATE_OBJECT* pState);
-typedef XGL_RESULT (XGLAPI *xglCreateDynamicRasterStateType)(XGL_DEVICE device, const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_RS_STATE_OBJECT* pState);
-typedef XGL_RESULT (XGLAPI *xglCreateDynamicColorBlendStateType)(XGL_DEVICE device, const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_CB_STATE_OBJECT* pState);
-typedef XGL_RESULT (XGLAPI *xglCreateDynamicDepthStencilStateType)(XGL_DEVICE device, const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_DS_STATE_OBJECT* pState);
-typedef XGL_RESULT (XGLAPI *xglCreateCommandBufferType)(XGL_DEVICE device, const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo, XGL_CMD_BUFFER* pCmdBuffer);
-typedef XGL_RESULT (XGLAPI *xglBeginCommandBufferType)(XGL_CMD_BUFFER cmdBuffer, const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo);
-typedef XGL_RESULT (XGLAPI *xglEndCommandBufferType)(XGL_CMD_BUFFER cmdBuffer);
-typedef XGL_RESULT (XGLAPI *xglResetCommandBufferType)(XGL_CMD_BUFFER cmdBuffer);
-typedef void (XGLAPI *xglCmdBindPipelineType)(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_PIPELINE pipeline);
-typedef void (XGLAPI *xglCmdBindDynamicStateObjectType)(XGL_CMD_BUFFER cmdBuffer, XGL_STATE_BIND_POINT stateBindPoint, XGL_DYNAMIC_STATE_OBJECT state);
-typedef void (XGLAPI *xglCmdBindDescriptorSetsType)(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData);
-typedef void (XGLAPI *xglCmdBindIndexBufferType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, XGL_INDEX_TYPE indexType);
-typedef void (XGLAPI *xglCmdBindVertexBufferType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t binding);
-typedef void (XGLAPI *xglCmdDrawType)(XGL_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount);
-typedef void (XGLAPI *xglCmdDrawIndexedType)(XGL_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount);
-typedef void (XGLAPI *xglCmdDrawIndirectType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride);
-typedef void (XGLAPI *xglCmdDrawIndexedIndirectType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride);
-typedef void (XGLAPI *xglCmdDispatchType)(XGL_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z);
-typedef void (XGLAPI *xglCmdDispatchIndirectType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset);
-typedef void (XGLAPI *xglCmdCopyBufferType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_COPY* pRegions);
-typedef void (XGLAPI *xglCmdCopyImageType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_IMAGE_COPY* pRegions);
-typedef void (XGLAPI *xglCmdBlitImageType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_IMAGE_BLIT* pRegions);
-typedef void (XGLAPI *xglCmdCopyBufferToImageType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions);
-typedef void (XGLAPI *xglCmdCopyImageToBufferType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions);
-typedef void (XGLAPI *xglCmdCloneImageDataType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout);
-typedef void (XGLAPI *xglCmdUpdateBufferType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE dataSize, const uint32_t* pData);
-typedef void (XGLAPI *xglCmdFillBufferType)(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE fillSize, uint32_t data);
-typedef void (XGLAPI *xglCmdClearColorImageType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout, XGL_CLEAR_COLOR color, uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges);
-typedef void (XGLAPI *xglCmdClearDepthStencilType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout, float depth, uint32_t stencil, uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges);
-typedef void (XGLAPI *xglCmdResolveImageType)(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t rectCount, const XGL_IMAGE_RESOLVE* pRects);
-typedef void (XGLAPI *xglCmdSetEventType)(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent);
-typedef void (XGLAPI *xglCmdResetEventType)(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent);
-typedef void (XGLAPI *xglCmdWaitEventsType)(XGL_CMD_BUFFER cmdBuffer, const XGL_EVENT_WAIT_INFO* pWaitInfo);
-typedef void (XGLAPI *xglCmdPipelineBarrierType)(XGL_CMD_BUFFER cmdBuffer, const XGL_PIPELINE_BARRIER* pBarrier);
-typedef void (XGLAPI *xglCmdBeginQueryType)(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot, XGL_FLAGS flags);
-typedef void (XGLAPI *xglCmdEndQueryType)(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot);
-typedef void (XGLAPI *xglCmdResetQueryPoolType)(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount);
-typedef void (XGLAPI *xglCmdWriteTimestampType)(XGL_CMD_BUFFER cmdBuffer, XGL_TIMESTAMP_TYPE timestampType, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset);
-typedef void (XGLAPI *xglCmdInitAtomicCountersType)(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData);
-typedef void (XGLAPI *xglCmdLoadAtomicCountersType)(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER srcBuffer, XGL_GPU_SIZE srcOffset);
-typedef void (XGLAPI *xglCmdSaveAtomicCountersType)(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset);
-typedef XGL_RESULT (XGLAPI *xglCreateFramebufferType)(XGL_DEVICE device, const XGL_FRAMEBUFFER_CREATE_INFO* pCreateInfo, XGL_FRAMEBUFFER* pFramebuffer);
-typedef XGL_RESULT (XGLAPI *xglCreateRenderPassType)(XGL_DEVICE device, const XGL_RENDER_PASS_CREATE_INFO* pCreateInfo, XGL_RENDER_PASS* pRenderPass);
-typedef void (XGLAPI *xglCmdBeginRenderPassType)(XGL_CMD_BUFFER cmdBuffer, const XGL_RENDER_PASS_BEGIN* pRenderPassBegin);
-typedef void (XGLAPI *xglCmdEndRenderPassType)(XGL_CMD_BUFFER cmdBuffer, XGL_RENDER_PASS renderPass);
-
-#ifdef XGL_PROTOTYPES
+typedef VK_RESULT (VKAPI *vkCreateInstanceType)(const VK_INSTANCE_CREATE_INFO* pCreateInfo, VK_INSTANCE* pInstance);
+typedef VK_RESULT (VKAPI *vkDestroyInstanceType)(VK_INSTANCE instance);
+typedef VK_RESULT (VKAPI *vkEnumerateGpusType)(VK_INSTANCE instance, uint32_t maxGpus, uint32_t* pGpuCount, VK_PHYSICAL_GPU* pGpus);
+typedef VK_RESULT (VKAPI *vkGetGpuInfoType)(VK_PHYSICAL_GPU gpu, VK_PHYSICAL_GPU_INFO_TYPE infoType, size_t* pDataSize, void* pData);
+typedef void * (VKAPI *vkGetProcAddrType)(VK_PHYSICAL_GPU gpu, const char * pName);
+typedef VK_RESULT (VKAPI *vkCreateDeviceType)(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice);
+typedef VK_RESULT (VKAPI *vkDestroyDeviceType)(VK_DEVICE device);
+typedef VK_RESULT (VKAPI *vkGetExtensionSupportType)(VK_PHYSICAL_GPU gpu, const char* pExtName);
+typedef VK_RESULT (VKAPI *vkEnumerateLayersType)(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved);
+typedef VK_RESULT (VKAPI *vkGetDeviceQueueType)(VK_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, VK_QUEUE* pQueue);
+typedef VK_RESULT (VKAPI *vkQueueSubmitType)(VK_QUEUE queue, uint32_t cmdBufferCount, const VK_CMD_BUFFER* pCmdBuffers, VK_FENCE fence);
+typedef VK_RESULT (VKAPI *vkQueueAddMemReferenceType)(VK_QUEUE queue, VK_GPU_MEMORY mem);
+typedef VK_RESULT (VKAPI *vkQueueRemoveMemReferenceType)(VK_QUEUE queue, VK_GPU_MEMORY mem);
+typedef VK_RESULT (VKAPI *vkQueueWaitIdleType)(VK_QUEUE queue);
+typedef VK_RESULT (VKAPI *vkDeviceWaitIdleType)(VK_DEVICE device);
+typedef VK_RESULT (VKAPI *vkAllocMemoryType)(VK_DEVICE device, const VK_MEMORY_ALLOC_INFO* pAllocInfo, VK_GPU_MEMORY* pMem);
+typedef VK_RESULT (VKAPI *vkFreeMemoryType)(VK_GPU_MEMORY mem);
+typedef VK_RESULT (VKAPI *vkSetMemoryPriorityType)(VK_GPU_MEMORY mem, VK_MEMORY_PRIORITY priority);
+typedef VK_RESULT (VKAPI *vkMapMemoryType)(VK_GPU_MEMORY mem, VK_FLAGS flags, void** ppData);
+typedef VK_RESULT (VKAPI *vkUnmapMemoryType)(VK_GPU_MEMORY mem);
+typedef VK_RESULT (VKAPI *vkPinSystemMemoryType)(VK_DEVICE device, const void* pSysMem, size_t memSize, VK_GPU_MEMORY* pMem);
+typedef VK_RESULT (VKAPI *vkGetMultiGpuCompatibilityType)(VK_PHYSICAL_GPU gpu0, VK_PHYSICAL_GPU gpu1, VK_GPU_COMPATIBILITY_INFO* pInfo);
+typedef VK_RESULT (VKAPI *vkOpenSharedMemoryType)(VK_DEVICE device, const VK_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem);
+typedef VK_RESULT (VKAPI *vkOpenSharedSemaphoreType)(VK_DEVICE device, const VK_SEMAPHORE_OPEN_INFO* pOpenInfo, VK_SEMAPHORE* pSemaphore);
+typedef VK_RESULT (VKAPI *vkOpenPeerMemoryType)(VK_DEVICE device, const VK_PEER_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem);
+typedef VK_RESULT (VKAPI *vkOpenPeerImageType)(VK_DEVICE device, const VK_PEER_IMAGE_OPEN_INFO* pOpenInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem);
+typedef VK_RESULT (VKAPI *vkDestroyObjectType)(VK_OBJECT object);
+typedef VK_RESULT (VKAPI *vkGetObjectInfoType)(VK_BASE_OBJECT object, VK_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData);
+typedef VK_RESULT (VKAPI *vkBindObjectMemoryType)(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_MEMORY mem, VK_GPU_SIZE offset);
+typedef VK_RESULT (VKAPI *vkBindObjectMemoryRangeType)(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_SIZE rangeOffset, VK_GPU_SIZE rangeSize, VK_GPU_MEMORY mem, VK_GPU_SIZE memOffset);
+typedef VK_RESULT (VKAPI *vkBindImageMemoryRangeType)(VK_IMAGE image, uint32_t allocationIdx, const VK_IMAGE_MEMORY_BIND_INFO* bindInfo, VK_GPU_MEMORY mem, VK_GPU_SIZE memOffset);
+typedef VK_RESULT (VKAPI *vkCreateFenceType)(VK_DEVICE device, const VK_FENCE_CREATE_INFO* pCreateInfo, VK_FENCE* pFence);
+typedef VK_RESULT (VKAPI *vkResetFencesType)(VK_DEVICE device, uint32_t fenceCount, VK_FENCE* pFences);
+typedef VK_RESULT (VKAPI *vkGetFenceStatusType)(VK_FENCE fence);
+typedef VK_RESULT (VKAPI *vkWaitForFencesType)(VK_DEVICE device, uint32_t fenceCount, const VK_FENCE* pFences, bool32_t waitAll, uint64_t timeout);
+typedef VK_RESULT (VKAPI *vkCreateSemaphoreType)(VK_DEVICE device, const VK_SEMAPHORE_CREATE_INFO* pCreateInfo, VK_SEMAPHORE* pSemaphore);
+typedef VK_RESULT (VKAPI *vkQueueSignalSemaphoreType)(VK_QUEUE queue, VK_SEMAPHORE semaphore);
+typedef VK_RESULT (VKAPI *vkQueueWaitSemaphoreType)(VK_QUEUE queue, VK_SEMAPHORE semaphore);
+typedef VK_RESULT (VKAPI *vkCreateEventType)(VK_DEVICE device, const VK_EVENT_CREATE_INFO* pCreateInfo, VK_EVENT* pEvent);
+typedef VK_RESULT (VKAPI *vkGetEventStatusType)(VK_EVENT event);
+typedef VK_RESULT (VKAPI *vkSetEventType)(VK_EVENT event);
+typedef VK_RESULT (VKAPI *vkResetEventType)(VK_EVENT event);
+typedef VK_RESULT (VKAPI *vkCreateQueryPoolType)(VK_DEVICE device, const VK_QUERY_POOL_CREATE_INFO* pCreateInfo, VK_QUERY_POOL* pQueryPool);
+typedef VK_RESULT (VKAPI *vkGetQueryPoolResultsType)(VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount, size_t* pDataSize, void* pData);
+typedef VK_RESULT (VKAPI *vkGetFormatInfoType)(VK_DEVICE device, VK_FORMAT format, VK_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData);
+typedef VK_RESULT (VKAPI *vkCreateBufferType)(VK_DEVICE device, const VK_BUFFER_CREATE_INFO* pCreateInfo, VK_BUFFER* pBuffer);
+typedef VK_RESULT (VKAPI *vkCreateBufferViewType)(VK_DEVICE device, const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo, VK_BUFFER_VIEW* pView);
+typedef VK_RESULT (VKAPI *vkCreateImageType)(VK_DEVICE device, const VK_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage);
+typedef VK_RESULT (VKAPI *vkGetImageSubresourceInfoType)(VK_IMAGE image, const VK_IMAGE_SUBRESOURCE* pSubresource, VK_SUBRESOURCE_INFO_TYPE infoType, size_t* pDataSize, void* pData);
+typedef VK_RESULT (VKAPI *vkCreateImageViewType)(VK_DEVICE device, const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo, VK_IMAGE_VIEW* pView);
+typedef VK_RESULT (VKAPI *vkCreateColorAttachmentViewType)(VK_DEVICE device, const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo, VK_COLOR_ATTACHMENT_VIEW* pView);
+typedef VK_RESULT (VKAPI *vkCreateDepthStencilViewType)(VK_DEVICE device, const VK_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, VK_DEPTH_STENCIL_VIEW* pView);
+typedef VK_RESULT (VKAPI *vkCreateShaderType)(VK_DEVICE device, const VK_SHADER_CREATE_INFO* pCreateInfo, VK_SHADER* pShader);
+typedef VK_RESULT (VKAPI *vkCreateGraphicsPipelineType)(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline);
+typedef VK_RESULT (VKAPI *vkCreateGraphicsPipelineDerivativeType)(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE basePipeline, VK_PIPELINE* pPipeline);
+typedef VK_RESULT (VKAPI *vkCreateComputePipelineType)(VK_DEVICE device, const VK_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline);
+typedef VK_RESULT (VKAPI *vkStorePipelineType)(VK_PIPELINE pipeline, size_t* pDataSize, void* pData);
+typedef VK_RESULT (VKAPI *vkLoadPipelineType)(VK_DEVICE device, size_t dataSize, const void* pData, VK_PIPELINE* pPipeline);
+typedef VK_RESULT (VKAPI *vkLoadPipelineDerivativeType)(VK_DEVICE device, size_t dataSize, const void* pData, VK_PIPELINE basePipeline, VK_PIPELINE* pPipeline);
+typedef VK_RESULT (VKAPI *vkCreateSamplerType)(VK_DEVICE device, const VK_SAMPLER_CREATE_INFO* pCreateInfo, VK_SAMPLER* pSampler);
+typedef VK_RESULT (VKAPI *vkCreateDescriptorSetLayoutType)(VK_DEVICE device, const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_SET_LAYOUT* pSetLayout);
+typedef VK_RESULT (VKAPI *vkCreateDescriptorSetLayoutChainType)(VK_DEVICE device, uint32_t setLayoutArrayCount, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray, VK_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain);
+typedef VK_RESULT (VKAPI *vkBeginDescriptorPoolUpdateType)(VK_DEVICE device, VK_DESCRIPTOR_UPDATE_MODE updateMode);
+typedef VK_RESULT (VKAPI *vkEndDescriptorPoolUpdateType)(VK_DEVICE device, VK_CMD_BUFFER cmd);
+typedef VK_RESULT (VKAPI *vkCreateDescriptorPoolType)(VK_DEVICE device, VK_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const VK_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_POOL* pDescriptorPool);
+typedef VK_RESULT (VKAPI *vkResetDescriptorPoolType)(VK_DESCRIPTOR_POOL descriptorPool);
+typedef VK_RESULT (VKAPI *vkAllocDescriptorSetsType)(VK_DESCRIPTOR_POOL descriptorPool, VK_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayouts, VK_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount);
+typedef void (VKAPI *vkClearDescriptorSetsType)(VK_DESCRIPTOR_POOL descriptorPool, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets);
+typedef void (VKAPI *vkUpdateDescriptorsType)(VK_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray);
+typedef VK_RESULT (VKAPI *vkCreateDynamicViewportStateType)(VK_DEVICE device, const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_VP_STATE_OBJECT* pState);
+typedef VK_RESULT (VKAPI *vkCreateDynamicRasterStateType)(VK_DEVICE device, const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_RS_STATE_OBJECT* pState);
+typedef VK_RESULT (VKAPI *vkCreateDynamicColorBlendStateType)(VK_DEVICE device, const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_CB_STATE_OBJECT* pState);
+typedef VK_RESULT (VKAPI *vkCreateDynamicDepthStencilStateType)(VK_DEVICE device, const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_DS_STATE_OBJECT* pState);
+typedef VK_RESULT (VKAPI *vkCreateCommandBufferType)(VK_DEVICE device, const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo, VK_CMD_BUFFER* pCmdBuffer);
+typedef VK_RESULT (VKAPI *vkBeginCommandBufferType)(VK_CMD_BUFFER cmdBuffer, const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo);
+typedef VK_RESULT (VKAPI *vkEndCommandBufferType)(VK_CMD_BUFFER cmdBuffer);
+typedef VK_RESULT (VKAPI *vkResetCommandBufferType)(VK_CMD_BUFFER cmdBuffer);
+typedef void (VKAPI *vkCmdBindPipelineType)(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_PIPELINE pipeline);
+typedef void (VKAPI *vkCmdBindDynamicStateObjectType)(VK_CMD_BUFFER cmdBuffer, VK_STATE_BIND_POINT stateBindPoint, VK_DYNAMIC_STATE_OBJECT dynamicState);
+typedef void (VKAPI *vkCmdBindDescriptorSetsType)(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData);
+typedef void (VKAPI *vkCmdBindIndexBufferType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, VK_INDEX_TYPE indexType);
+typedef void (VKAPI *vkCmdBindVertexBufferType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t binding);
+typedef void (VKAPI *vkCmdDrawType)(VK_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount);
+typedef void (VKAPI *vkCmdDrawIndexedType)(VK_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount);
+typedef void (VKAPI *vkCmdDrawIndirectType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride);
+typedef void (VKAPI *vkCmdDrawIndexedIndirectType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride);
+typedef void (VKAPI *vkCmdDispatchType)(VK_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z);
+typedef void (VKAPI *vkCmdDispatchIndirectType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset);
+typedef void (VKAPI *vkCmdCopyBufferType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_COPY* pRegions);
+typedef void (VKAPI *vkCmdCopyImageType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_IMAGE_COPY* pRegions);
+typedef void (VKAPI *vkCmdBlitImageType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_IMAGE_BLIT* pRegions);
+typedef void (VKAPI *vkCmdCopyBufferToImageType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions);
+typedef void (VKAPI *vkCmdCopyImageToBufferType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions);
+typedef void (VKAPI *vkCmdCloneImageDataType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout);
+typedef void (VKAPI *vkCmdUpdateBufferType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE dataSize, const uint32_t* pData);
+typedef void (VKAPI *vkCmdFillBufferType)(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE fillSize, uint32_t data);
+typedef void (VKAPI *vkCmdClearColorImageType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout, VK_CLEAR_COLOR color, uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges);
+typedef void (VKAPI *vkCmdClearDepthStencilType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout, float depth, uint32_t stencil, uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges);
+typedef void (VKAPI *vkCmdResolveImageType)(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t rectCount, const VK_IMAGE_RESOLVE* pRects);
+typedef void (VKAPI *vkCmdSetEventType)(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent);
+typedef void (VKAPI *vkCmdResetEventType)(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent);
+typedef void (VKAPI *vkCmdWaitEventsType)(VK_CMD_BUFFER cmdBuffer, const VK_EVENT_WAIT_INFO* pWaitInfo);
+typedef void (VKAPI *vkCmdPipelineBarrierType)(VK_CMD_BUFFER cmdBuffer, const VK_PIPELINE_BARRIER* pBarrier);
+typedef void (VKAPI *vkCmdBeginQueryType)(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot, VK_FLAGS flags);
+typedef void (VKAPI *vkCmdEndQueryType)(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot);
+typedef void (VKAPI *vkCmdResetQueryPoolType)(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount);
+typedef void (VKAPI *vkCmdWriteTimestampType)(VK_CMD_BUFFER cmdBuffer, VK_TIMESTAMP_TYPE timestampType, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset);
+typedef void (VKAPI *vkCmdInitAtomicCountersType)(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData);
+typedef void (VKAPI *vkCmdLoadAtomicCountersType)(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER srcBuffer, VK_GPU_SIZE srcOffset);
+typedef void (VKAPI *vkCmdSaveAtomicCountersType)(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset);
+typedef VK_RESULT (VKAPI *vkCreateFramebufferType)(VK_DEVICE device, const VK_FRAMEBUFFER_CREATE_INFO* pCreateInfo, VK_FRAMEBUFFER* pFramebuffer);
+typedef VK_RESULT (VKAPI *vkCreateRenderPassType)(VK_DEVICE device, const VK_RENDER_PASS_CREATE_INFO* pCreateInfo, VK_RENDER_PASS* pRenderPass);
+typedef void (VKAPI *vkCmdBeginRenderPassType)(VK_CMD_BUFFER cmdBuffer, const VK_RENDER_PASS_BEGIN* pRenderPassBegin);
+typedef void (VKAPI *vkCmdEndRenderPassType)(VK_CMD_BUFFER cmdBuffer, VK_RENDER_PASS renderPass);
+
+#ifdef VK_PROTOTYPES
// GPU initialization
-XGL_RESULT XGLAPI xglCreateInstance(
- const XGL_INSTANCE_CREATE_INFO* pCreateInfo,
- XGL_INSTANCE* pInstance);
+VK_RESULT VKAPI vkCreateInstance(
+ const VK_INSTANCE_CREATE_INFO* pCreateInfo,
+ VK_INSTANCE* pInstance);
-XGL_RESULT XGLAPI xglDestroyInstance(
- XGL_INSTANCE instance);
+VK_RESULT VKAPI vkDestroyInstance(
+ VK_INSTANCE instance);
-XGL_RESULT XGLAPI xglEnumerateGpus(
- XGL_INSTANCE instance,
+VK_RESULT VKAPI vkEnumerateGpus(
+ VK_INSTANCE instance,
uint32_t maxGpus,
uint32_t* pGpuCount,
- XGL_PHYSICAL_GPU* pGpus);
+ VK_PHYSICAL_GPU* pGpus);
-XGL_RESULT XGLAPI xglGetGpuInfo(
- XGL_PHYSICAL_GPU gpu,
- XGL_PHYSICAL_GPU_INFO_TYPE infoType,
+VK_RESULT VKAPI vkGetGpuInfo(
+ VK_PHYSICAL_GPU gpu,
+ VK_PHYSICAL_GPU_INFO_TYPE infoType,
size_t* pDataSize,
void* pData);
-void * XGLAPI xglGetProcAddr(
- XGL_PHYSICAL_GPU gpu,
+void * VKAPI vkGetProcAddr(
+ VK_PHYSICAL_GPU gpu,
const char* pName);
// Device functions
-XGL_RESULT XGLAPI xglCreateDevice(
- XGL_PHYSICAL_GPU gpu,
- const XGL_DEVICE_CREATE_INFO* pCreateInfo,
- XGL_DEVICE* pDevice);
+VK_RESULT VKAPI vkCreateDevice(
+ VK_PHYSICAL_GPU gpu,
+ const VK_DEVICE_CREATE_INFO* pCreateInfo,
+ VK_DEVICE* pDevice);
-XGL_RESULT XGLAPI xglDestroyDevice(
- XGL_DEVICE device);
+VK_RESULT VKAPI vkDestroyDevice(
+ VK_DEVICE device);
// Extension discovery functions
-XGL_RESULT XGLAPI xglGetExtensionSupport(
- XGL_PHYSICAL_GPU gpu,
+VK_RESULT VKAPI vkGetExtensionSupport(
+ VK_PHYSICAL_GPU gpu,
const char* pExtName);
// Layer discovery functions
-XGL_RESULT XGLAPI xglEnumerateLayers(
- XGL_PHYSICAL_GPU gpu,
+VK_RESULT VKAPI vkEnumerateLayers(
+ VK_PHYSICAL_GPU gpu,
size_t maxLayerCount,
size_t maxStringSize,
size_t* pOutLayerCount,
// Queue functions
-XGL_RESULT XGLAPI xglGetDeviceQueue(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkGetDeviceQueue(
+ VK_DEVICE device,
uint32_t queueNodeIndex,
uint32_t queueIndex,
- XGL_QUEUE* pQueue);
+ VK_QUEUE* pQueue);
-XGL_RESULT XGLAPI xglQueueSubmit(
- XGL_QUEUE queue,
+VK_RESULT VKAPI vkQueueSubmit(
+ VK_QUEUE queue,
uint32_t cmdBufferCount,
- const XGL_CMD_BUFFER* pCmdBuffers,
- XGL_FENCE fence);
+ const VK_CMD_BUFFER* pCmdBuffers,
+ VK_FENCE fence);
-XGL_RESULT XGLAPI xglQueueAddMemReference(
- XGL_QUEUE queue,
- XGL_GPU_MEMORY mem);
+VK_RESULT VKAPI vkQueueAddMemReference(
+ VK_QUEUE queue,
+ VK_GPU_MEMORY mem);
-XGL_RESULT XGLAPI xglQueueRemoveMemReference(
- XGL_QUEUE queue,
- XGL_GPU_MEMORY mem);
+VK_RESULT VKAPI vkQueueRemoveMemReference(
+ VK_QUEUE queue,
+ VK_GPU_MEMORY mem);
-XGL_RESULT XGLAPI xglQueueWaitIdle(
- XGL_QUEUE queue);
+VK_RESULT VKAPI vkQueueWaitIdle(
+ VK_QUEUE queue);
-XGL_RESULT XGLAPI xglDeviceWaitIdle(
- XGL_DEVICE device);
+VK_RESULT VKAPI vkDeviceWaitIdle(
+ VK_DEVICE device);
// Memory functions
-XGL_RESULT XGLAPI xglAllocMemory(
- XGL_DEVICE device,
- const XGL_MEMORY_ALLOC_INFO* pAllocInfo,
- XGL_GPU_MEMORY* pMem);
+VK_RESULT VKAPI vkAllocMemory(
+ VK_DEVICE device,
+ const VK_MEMORY_ALLOC_INFO* pAllocInfo,
+ VK_GPU_MEMORY* pMem);
-XGL_RESULT XGLAPI xglFreeMemory(
- XGL_GPU_MEMORY mem);
+VK_RESULT VKAPI vkFreeMemory(
+ VK_GPU_MEMORY mem);
-XGL_RESULT XGLAPI xglSetMemoryPriority(
- XGL_GPU_MEMORY mem,
- XGL_MEMORY_PRIORITY priority);
+VK_RESULT VKAPI vkSetMemoryPriority(
+ VK_GPU_MEMORY mem,
+ VK_MEMORY_PRIORITY priority);
-XGL_RESULT XGLAPI xglMapMemory(
- XGL_GPU_MEMORY mem,
- XGL_FLAGS flags, // Reserved
+VK_RESULT VKAPI vkMapMemory(
+ VK_GPU_MEMORY mem,
+ VK_FLAGS flags, // Reserved
void** ppData);
-XGL_RESULT XGLAPI xglUnmapMemory(
- XGL_GPU_MEMORY mem);
+VK_RESULT VKAPI vkUnmapMemory(
+ VK_GPU_MEMORY mem);
-XGL_RESULT XGLAPI xglPinSystemMemory(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkPinSystemMemory(
+ VK_DEVICE device,
const void* pSysMem,
size_t memSize,
- XGL_GPU_MEMORY* pMem);
+ VK_GPU_MEMORY* pMem);
// Multi-device functions
-XGL_RESULT XGLAPI xglGetMultiGpuCompatibility(
- XGL_PHYSICAL_GPU gpu0,
- XGL_PHYSICAL_GPU gpu1,
- XGL_GPU_COMPATIBILITY_INFO* pInfo);
+VK_RESULT VKAPI vkGetMultiGpuCompatibility(
+ VK_PHYSICAL_GPU gpu0,
+ VK_PHYSICAL_GPU gpu1,
+ VK_GPU_COMPATIBILITY_INFO* pInfo);
-XGL_RESULT XGLAPI xglOpenSharedMemory(
- XGL_DEVICE device,
- const XGL_MEMORY_OPEN_INFO* pOpenInfo,
- XGL_GPU_MEMORY* pMem);
+VK_RESULT VKAPI vkOpenSharedMemory(
+ VK_DEVICE device,
+ const VK_MEMORY_OPEN_INFO* pOpenInfo,
+ VK_GPU_MEMORY* pMem);
-XGL_RESULT XGLAPI xglOpenSharedSemaphore(
- XGL_DEVICE device,
- const XGL_SEMAPHORE_OPEN_INFO* pOpenInfo,
- XGL_SEMAPHORE* pSemaphore);
+VK_RESULT VKAPI vkOpenSharedSemaphore(
+ VK_DEVICE device,
+ const VK_SEMAPHORE_OPEN_INFO* pOpenInfo,
+ VK_SEMAPHORE* pSemaphore);
-XGL_RESULT XGLAPI xglOpenPeerMemory(
- XGL_DEVICE device,
- const XGL_PEER_MEMORY_OPEN_INFO* pOpenInfo,
- XGL_GPU_MEMORY* pMem);
+VK_RESULT VKAPI vkOpenPeerMemory(
+ VK_DEVICE device,
+ const VK_PEER_MEMORY_OPEN_INFO* pOpenInfo,
+ VK_GPU_MEMORY* pMem);
-XGL_RESULT XGLAPI xglOpenPeerImage(
- XGL_DEVICE device,
- const XGL_PEER_IMAGE_OPEN_INFO* pOpenInfo,
- XGL_IMAGE* pImage,
- XGL_GPU_MEMORY* pMem);
+VK_RESULT VKAPI vkOpenPeerImage(
+ VK_DEVICE device,
+ const VK_PEER_IMAGE_OPEN_INFO* pOpenInfo,
+ VK_IMAGE* pImage,
+ VK_GPU_MEMORY* pMem);
// Generic API object functions
-XGL_RESULT XGLAPI xglDestroyObject(
- XGL_OBJECT object);
+VK_RESULT VKAPI vkDestroyObject(
+ VK_OBJECT object);
-XGL_RESULT XGLAPI xglGetObjectInfo(
- XGL_BASE_OBJECT object,
- XGL_OBJECT_INFO_TYPE infoType,
+VK_RESULT VKAPI vkGetObjectInfo(
+ VK_BASE_OBJECT object,
+ VK_OBJECT_INFO_TYPE infoType,
size_t* pDataSize,
void* pData);
-XGL_RESULT XGLAPI xglBindObjectMemory(
- XGL_OBJECT object,
+VK_RESULT VKAPI vkBindObjectMemory(
+ VK_OBJECT object,
uint32_t allocationIdx,
- XGL_GPU_MEMORY mem,
- XGL_GPU_SIZE memOffset);
+ VK_GPU_MEMORY mem,
+ VK_GPU_SIZE memOffset);
-XGL_RESULT XGLAPI xglBindObjectMemoryRange(
- XGL_OBJECT object,
+VK_RESULT VKAPI vkBindObjectMemoryRange(
+ VK_OBJECT object,
uint32_t allocationIdx,
- XGL_GPU_SIZE rangeOffset,
- XGL_GPU_SIZE rangeSize,
- XGL_GPU_MEMORY mem,
- XGL_GPU_SIZE memOffset);
+ VK_GPU_SIZE rangeOffset,
+ VK_GPU_SIZE rangeSize,
+ VK_GPU_MEMORY mem,
+ VK_GPU_SIZE memOffset);
-XGL_RESULT XGLAPI xglBindImageMemoryRange(
- XGL_IMAGE image,
+VK_RESULT VKAPI vkBindImageMemoryRange(
+ VK_IMAGE image,
uint32_t allocationIdx,
- const XGL_IMAGE_MEMORY_BIND_INFO* bindInfo,
- XGL_GPU_MEMORY mem,
- XGL_GPU_SIZE memOffset);
+ const VK_IMAGE_MEMORY_BIND_INFO* bindInfo,
+ VK_GPU_MEMORY mem,
+ VK_GPU_SIZE memOffset);
// Fence functions
-XGL_RESULT XGLAPI xglCreateFence(
- XGL_DEVICE device,
- const XGL_FENCE_CREATE_INFO* pCreateInfo,
- XGL_FENCE* pFence);
+VK_RESULT VKAPI vkCreateFence(
+ VK_DEVICE device,
+ const VK_FENCE_CREATE_INFO* pCreateInfo,
+ VK_FENCE* pFence);
-XGL_RESULT XGLAPI xglResetFences(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkResetFences(
+ VK_DEVICE device,
uint32_t fenceCount,
- XGL_FENCE* pFences);
+ VK_FENCE* pFences);
-XGL_RESULT XGLAPI xglGetFenceStatus(
- XGL_FENCE fence);
+VK_RESULT VKAPI vkGetFenceStatus(
+ VK_FENCE fence);
-XGL_RESULT XGLAPI xglWaitForFences(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkWaitForFences(
+ VK_DEVICE device,
uint32_t fenceCount,
- const XGL_FENCE* pFences,
+ const VK_FENCE* pFences,
bool32_t waitAll,
uint64_t timeout); // timeout in nanoseconds
// Queue semaphore functions
-XGL_RESULT XGLAPI xglCreateSemaphore(
- XGL_DEVICE device,
- const XGL_SEMAPHORE_CREATE_INFO* pCreateInfo,
- XGL_SEMAPHORE* pSemaphore);
+VK_RESULT VKAPI vkCreateSemaphore(
+ VK_DEVICE device,
+ const VK_SEMAPHORE_CREATE_INFO* pCreateInfo,
+ VK_SEMAPHORE* pSemaphore);
-XGL_RESULT XGLAPI xglQueueSignalSemaphore(
- XGL_QUEUE queue,
- XGL_SEMAPHORE semaphore);
+VK_RESULT VKAPI vkQueueSignalSemaphore(
+ VK_QUEUE queue,
+ VK_SEMAPHORE semaphore);
-XGL_RESULT XGLAPI xglQueueWaitSemaphore(
- XGL_QUEUE queue,
- XGL_SEMAPHORE semaphore);
+VK_RESULT VKAPI vkQueueWaitSemaphore(
+ VK_QUEUE queue,
+ VK_SEMAPHORE semaphore);
// Event functions
-XGL_RESULT XGLAPI xglCreateEvent(
- XGL_DEVICE device,
- const XGL_EVENT_CREATE_INFO* pCreateInfo,
- XGL_EVENT* pEvent);
+VK_RESULT VKAPI vkCreateEvent(
+ VK_DEVICE device,
+ const VK_EVENT_CREATE_INFO* pCreateInfo,
+ VK_EVENT* pEvent);
-XGL_RESULT XGLAPI xglGetEventStatus(
- XGL_EVENT event);
+VK_RESULT VKAPI vkGetEventStatus(
+ VK_EVENT event);
-XGL_RESULT XGLAPI xglSetEvent(
- XGL_EVENT event);
+VK_RESULT VKAPI vkSetEvent(
+ VK_EVENT event);
-XGL_RESULT XGLAPI xglResetEvent(
- XGL_EVENT event);
+VK_RESULT VKAPI vkResetEvent(
+ VK_EVENT event);
// Query functions
-XGL_RESULT XGLAPI xglCreateQueryPool(
- XGL_DEVICE device,
- const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo,
- XGL_QUERY_POOL* pQueryPool);
+VK_RESULT VKAPI vkCreateQueryPool(
+ VK_DEVICE device,
+ const VK_QUERY_POOL_CREATE_INFO* pCreateInfo,
+ VK_QUERY_POOL* pQueryPool);
-XGL_RESULT XGLAPI xglGetQueryPoolResults(
- XGL_QUERY_POOL queryPool,
+VK_RESULT VKAPI vkGetQueryPoolResults(
+ VK_QUERY_POOL queryPool,
uint32_t startQuery,
uint32_t queryCount,
size_t* pDataSize,
// Format capabilities
-XGL_RESULT XGLAPI xglGetFormatInfo(
- XGL_DEVICE device,
- XGL_FORMAT format,
- XGL_FORMAT_INFO_TYPE infoType,
+VK_RESULT VKAPI vkGetFormatInfo(
+ VK_DEVICE device,
+ VK_FORMAT format,
+ VK_FORMAT_INFO_TYPE infoType,
size_t* pDataSize,
void* pData);
// Buffer functions
-XGL_RESULT XGLAPI xglCreateBuffer(
- XGL_DEVICE device,
- const XGL_BUFFER_CREATE_INFO* pCreateInfo,
- XGL_BUFFER* pBuffer);
+VK_RESULT VKAPI vkCreateBuffer(
+ VK_DEVICE device,
+ const VK_BUFFER_CREATE_INFO* pCreateInfo,
+ VK_BUFFER* pBuffer);
// Buffer view functions
-XGL_RESULT XGLAPI xglCreateBufferView(
- XGL_DEVICE device,
- const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo,
- XGL_BUFFER_VIEW* pView);
+VK_RESULT VKAPI vkCreateBufferView(
+ VK_DEVICE device,
+ const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo,
+ VK_BUFFER_VIEW* pView);
// Image functions
-XGL_RESULT XGLAPI xglCreateImage(
- XGL_DEVICE device,
- const XGL_IMAGE_CREATE_INFO* pCreateInfo,
- XGL_IMAGE* pImage);
+VK_RESULT VKAPI vkCreateImage(
+ VK_DEVICE device,
+ const VK_IMAGE_CREATE_INFO* pCreateInfo,
+ VK_IMAGE* pImage);
-XGL_RESULT XGLAPI xglGetImageSubresourceInfo(
- XGL_IMAGE image,
- const XGL_IMAGE_SUBRESOURCE* pSubresource,
- XGL_SUBRESOURCE_INFO_TYPE infoType,
+VK_RESULT VKAPI vkGetImageSubresourceInfo(
+ VK_IMAGE image,
+ const VK_IMAGE_SUBRESOURCE* pSubresource,
+ VK_SUBRESOURCE_INFO_TYPE infoType,
size_t* pDataSize,
void* pData);
// Image view functions
-XGL_RESULT XGLAPI xglCreateImageView(
- XGL_DEVICE device,
- const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
- XGL_IMAGE_VIEW* pView);
+VK_RESULT VKAPI vkCreateImageView(
+ VK_DEVICE device,
+ const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo,
+ VK_IMAGE_VIEW* pView);
-XGL_RESULT XGLAPI xglCreateColorAttachmentView(
- XGL_DEVICE device,
- const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
- XGL_COLOR_ATTACHMENT_VIEW* pView);
+VK_RESULT VKAPI vkCreateColorAttachmentView(
+ VK_DEVICE device,
+ const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
+ VK_COLOR_ATTACHMENT_VIEW* pView);
-XGL_RESULT XGLAPI xglCreateDepthStencilView(
- XGL_DEVICE device,
- const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo,
- XGL_DEPTH_STENCIL_VIEW* pView);
+VK_RESULT VKAPI vkCreateDepthStencilView(
+ VK_DEVICE device,
+ const VK_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo,
+ VK_DEPTH_STENCIL_VIEW* pView);
// Shader functions
-XGL_RESULT XGLAPI xglCreateShader(
- XGL_DEVICE device,
- const XGL_SHADER_CREATE_INFO* pCreateInfo,
- XGL_SHADER* pShader);
+VK_RESULT VKAPI vkCreateShader(
+ VK_DEVICE device,
+ const VK_SHADER_CREATE_INFO* pCreateInfo,
+ VK_SHADER* pShader);
// Pipeline functions
-XGL_RESULT XGLAPI xglCreateGraphicsPipeline(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE* pPipeline);
+VK_RESULT VKAPI vkCreateGraphicsPipeline(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE* pPipeline);
-XGL_RESULT XGLAPI xglCreateGraphicsPipelineDerivative(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline);
+VK_RESULT VKAPI vkCreateGraphicsPipelineDerivative(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline);
-XGL_RESULT XGLAPI xglCreateComputePipeline(
- XGL_DEVICE device,
- const XGL_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE* pPipeline);
+VK_RESULT VKAPI vkCreateComputePipeline(
+ VK_DEVICE device,
+ const VK_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE* pPipeline);
-XGL_RESULT XGLAPI xglStorePipeline(
- XGL_PIPELINE pipeline,
+VK_RESULT VKAPI vkStorePipeline(
+ VK_PIPELINE pipeline,
size_t* pDataSize,
void* pData);
-XGL_RESULT XGLAPI xglLoadPipeline(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkLoadPipeline(
+ VK_DEVICE device,
size_t dataSize,
const void* pData,
- XGL_PIPELINE* pPipeline);
+ VK_PIPELINE* pPipeline);
-XGL_RESULT XGLAPI xglLoadPipelineDerivative(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkLoadPipelineDerivative(
+ VK_DEVICE device,
size_t dataSize,
const void* pData,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline);
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline);
// Sampler functions
-XGL_RESULT XGLAPI xglCreateSampler(
- XGL_DEVICE device,
- const XGL_SAMPLER_CREATE_INFO* pCreateInfo,
- XGL_SAMPLER* pSampler);
+VK_RESULT VKAPI vkCreateSampler(
+ VK_DEVICE device,
+ const VK_SAMPLER_CREATE_INFO* pCreateInfo,
+ VK_SAMPLER* pSampler);
// Descriptor set functions
-XGL_RESULT XGLAPI xglCreateDescriptorSetLayout(
- XGL_DEVICE device,
- const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo,
- XGL_DESCRIPTOR_SET_LAYOUT* pSetLayout);
+VK_RESULT VKAPI vkCreateDescriptorSetLayout(
+ VK_DEVICE device,
+ const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo,
+ VK_DESCRIPTOR_SET_LAYOUT* pSetLayout);
-XGL_RESULT XGLAPI xglCreateDescriptorSetLayoutChain(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkCreateDescriptorSetLayoutChain(
+ VK_DEVICE device,
uint32_t setLayoutArrayCount,
- const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray,
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain);
+ const VK_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray,
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain);
-XGL_RESULT XGLAPI xglBeginDescriptorPoolUpdate(
- XGL_DEVICE device,
- XGL_DESCRIPTOR_UPDATE_MODE updateMode);
+VK_RESULT VKAPI vkBeginDescriptorPoolUpdate(
+ VK_DEVICE device,
+ VK_DESCRIPTOR_UPDATE_MODE updateMode);
-XGL_RESULT XGLAPI xglEndDescriptorPoolUpdate(
- XGL_DEVICE device,
- XGL_CMD_BUFFER cmd);
+VK_RESULT VKAPI vkEndDescriptorPoolUpdate(
+ VK_DEVICE device,
+ VK_CMD_BUFFER cmd);
-XGL_RESULT XGLAPI xglCreateDescriptorPool(
- XGL_DEVICE device,
- XGL_DESCRIPTOR_POOL_USAGE poolUsage,
+VK_RESULT VKAPI vkCreateDescriptorPool(
+ VK_DEVICE device,
+ VK_DESCRIPTOR_POOL_USAGE poolUsage,
uint32_t maxSets,
- const XGL_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo,
- XGL_DESCRIPTOR_POOL* pDescriptorPool);
+ const VK_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo,
+ VK_DESCRIPTOR_POOL* pDescriptorPool);
-XGL_RESULT XGLAPI xglResetDescriptorPool(
- XGL_DESCRIPTOR_POOL descriptorPool);
+VK_RESULT VKAPI vkResetDescriptorPool(
+ VK_DESCRIPTOR_POOL descriptorPool);
-XGL_RESULT XGLAPI xglAllocDescriptorSets(
- XGL_DESCRIPTOR_POOL descriptorPool,
- XGL_DESCRIPTOR_SET_USAGE setUsage,
+VK_RESULT VKAPI vkAllocDescriptorSets(
+ VK_DESCRIPTOR_POOL descriptorPool,
+ VK_DESCRIPTOR_SET_USAGE setUsage,
uint32_t count,
- const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayouts,
- XGL_DESCRIPTOR_SET* pDescriptorSets,
+ const VK_DESCRIPTOR_SET_LAYOUT* pSetLayouts,
+ VK_DESCRIPTOR_SET* pDescriptorSets,
uint32_t* pCount);
-void XGLAPI xglClearDescriptorSets(
- XGL_DESCRIPTOR_POOL descriptorPool,
+void VKAPI vkClearDescriptorSets(
+ VK_DESCRIPTOR_POOL descriptorPool,
uint32_t count,
- const XGL_DESCRIPTOR_SET* pDescriptorSets);
+ const VK_DESCRIPTOR_SET* pDescriptorSets);
-void XGLAPI xglUpdateDescriptors(
- XGL_DESCRIPTOR_SET descriptorSet,
+void VKAPI vkUpdateDescriptors(
+ VK_DESCRIPTOR_SET descriptorSet,
uint32_t updateCount,
const void** ppUpdateArray);
// State object functions
-XGL_RESULT XGLAPI xglCreateDynamicViewportState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_VP_STATE_OBJECT* pState);
+VK_RESULT VKAPI vkCreateDynamicViewportState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_VP_STATE_OBJECT* pState);
-XGL_RESULT XGLAPI xglCreateDynamicRasterState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_RS_STATE_OBJECT* pState);
+VK_RESULT VKAPI vkCreateDynamicRasterState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_RS_STATE_OBJECT* pState);
-XGL_RESULT XGLAPI xglCreateDynamicColorBlendState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_CB_STATE_OBJECT* pState);
+VK_RESULT VKAPI vkCreateDynamicColorBlendState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_CB_STATE_OBJECT* pState);
-XGL_RESULT XGLAPI xglCreateDynamicDepthStencilState(
- XGL_DEVICE device,
- const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_DS_STATE_OBJECT* pState);
+VK_RESULT VKAPI vkCreateDynamicDepthStencilState(
+ VK_DEVICE device,
+ const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_DS_STATE_OBJECT* pState);
// Command buffer functions
-XGL_RESULT XGLAPI xglCreateCommandBuffer(
- XGL_DEVICE device,
- const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo,
- XGL_CMD_BUFFER* pCmdBuffer);
+VK_RESULT VKAPI vkCreateCommandBuffer(
+ VK_DEVICE device,
+ const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo,
+ VK_CMD_BUFFER* pCmdBuffer);
-XGL_RESULT XGLAPI xglBeginCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo);
+VK_RESULT VKAPI vkBeginCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo);
-XGL_RESULT XGLAPI xglEndCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer);
+VK_RESULT VKAPI vkEndCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer);
-XGL_RESULT XGLAPI xglResetCommandBuffer(
- XGL_CMD_BUFFER cmdBuffer);
+VK_RESULT VKAPI vkResetCommandBuffer(
+ VK_CMD_BUFFER cmdBuffer);
// Command buffer building functions
-void XGLAPI xglCmdBindPipeline(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
- XGL_PIPELINE pipeline);
+void VKAPI vkCmdBindPipeline(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
+ VK_PIPELINE pipeline);
-void XGLAPI xglCmdBindDynamicStateObject(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_STATE_BIND_POINT stateBindPoint,
- XGL_DYNAMIC_STATE_OBJECT dynamicState);
+void VKAPI vkCmdBindDynamicStateObject(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_STATE_BIND_POINT stateBindPoint,
+ VK_DYNAMIC_STATE_OBJECT dynamicState);
-void XGLAPI xglCmdBindDescriptorSets(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
+void VKAPI vkCmdBindDescriptorSets(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
uint32_t layoutChainSlot,
uint32_t count,
- const XGL_DESCRIPTOR_SET* pDescriptorSets,
+ const VK_DESCRIPTOR_SET* pDescriptorSets,
const uint32_t* pUserData);
-void XGLAPI xglCmdBindIndexBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
- XGL_INDEX_TYPE indexType);
+void VKAPI vkCmdBindIndexBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
+ VK_INDEX_TYPE indexType);
-void XGLAPI xglCmdBindVertexBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+void VKAPI vkCmdBindVertexBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t binding);
-void XGLAPI xglCmdDraw(
- XGL_CMD_BUFFER cmdBuffer,
+void VKAPI vkCmdDraw(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t firstVertex,
uint32_t vertexCount,
uint32_t firstInstance,
uint32_t instanceCount);
-void XGLAPI xglCmdDrawIndexed(
- XGL_CMD_BUFFER cmdBuffer,
+void VKAPI vkCmdDrawIndexed(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t firstIndex,
uint32_t indexCount,
int32_t vertexOffset,
uint32_t firstInstance,
uint32_t instanceCount);
-void XGLAPI xglCmdDrawIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+void VKAPI vkCmdDrawIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t count,
uint32_t stride);
-void XGLAPI xglCmdDrawIndexedIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset,
+void VKAPI vkCmdDrawIndexedIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset,
uint32_t count,
uint32_t stride);
-void XGLAPI xglCmdDispatch(
- XGL_CMD_BUFFER cmdBuffer,
+void VKAPI vkCmdDispatch(
+ VK_CMD_BUFFER cmdBuffer,
uint32_t x,
uint32_t y,
uint32_t z);
-void XGLAPI xglCmdDispatchIndirect(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER buffer,
- XGL_GPU_SIZE offset);
+void VKAPI vkCmdDispatchIndirect(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER buffer,
+ VK_GPU_SIZE offset);
-void XGLAPI xglCmdCopyBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_BUFFER destBuffer,
+void VKAPI vkCmdCopyBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_BUFFER destBuffer,
uint32_t regionCount,
- const XGL_BUFFER_COPY* pRegions);
-
-void XGLAPI xglCmdCopyImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ const VK_BUFFER_COPY* pRegions);
+
+void VKAPI vkCmdCopyImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_IMAGE_COPY* pRegions);
-
-void XGLAPI xglCmdBlitImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ const VK_IMAGE_COPY* pRegions);
+
+void VKAPI vkCmdBlitImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_IMAGE_BLIT* pRegions);
+ const VK_IMAGE_BLIT* pRegions);
-void XGLAPI xglCmdCopyBufferToImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+void VKAPI vkCmdCopyBufferToImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t regionCount,
- const XGL_BUFFER_IMAGE_COPY* pRegions);
+ const VK_BUFFER_IMAGE_COPY* pRegions);
-void XGLAPI xglCmdCopyImageToBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_BUFFER destBuffer,
+void VKAPI vkCmdCopyImageToBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_BUFFER destBuffer,
uint32_t regionCount,
- const XGL_BUFFER_IMAGE_COPY* pRegions);
-
-void XGLAPI xglCmdCloneImageData(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout);
-
-void XGLAPI xglCmdUpdateBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset,
- XGL_GPU_SIZE dataSize,
+ const VK_BUFFER_IMAGE_COPY* pRegions);
+
+void VKAPI vkCmdCloneImageData(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout);
+
+void VKAPI vkCmdUpdateBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset,
+ VK_GPU_SIZE dataSize,
const uint32_t* pData);
-void XGLAPI xglCmdFillBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset,
- XGL_GPU_SIZE fillSize,
+void VKAPI vkCmdFillBuffer(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset,
+ VK_GPU_SIZE fillSize,
uint32_t data);
-void XGLAPI xglCmdClearColorImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT imageLayout,
- XGL_CLEAR_COLOR color,
+void VKAPI vkCmdClearColorImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT imageLayout,
+ VK_CLEAR_COLOR color,
uint32_t rangeCount,
- const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges);
+ const VK_IMAGE_SUBRESOURCE_RANGE* pRanges);
-void XGLAPI xglCmdClearDepthStencil(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image,
- XGL_IMAGE_LAYOUT imageLayout,
+void VKAPI vkCmdClearDepthStencil(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image,
+ VK_IMAGE_LAYOUT imageLayout,
float depth,
uint32_t stencil,
uint32_t rangeCount,
- const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges);
-
-void XGLAPI xglCmdResolveImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage,
- XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage,
- XGL_IMAGE_LAYOUT destImageLayout,
+ const VK_IMAGE_SUBRESOURCE_RANGE* pRanges);
+
+void VKAPI vkCmdResolveImage(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage,
+ VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage,
+ VK_IMAGE_LAYOUT destImageLayout,
uint32_t rectCount,
- const XGL_IMAGE_RESOLVE* pRects);
+ const VK_IMAGE_RESOLVE* pRects);
-void XGLAPI xglCmdSetEvent(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_EVENT event,
- XGL_PIPE_EVENT pipeEvent);
+void VKAPI vkCmdSetEvent(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_EVENT event,
+ VK_PIPE_EVENT pipeEvent);
-void XGLAPI xglCmdResetEvent(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_EVENT event,
- XGL_PIPE_EVENT pipeEvent);
+void VKAPI vkCmdResetEvent(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_EVENT event,
+ VK_PIPE_EVENT pipeEvent);
-void XGLAPI xglCmdWaitEvents(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_EVENT_WAIT_INFO* pWaitInfo);
+void VKAPI vkCmdWaitEvents(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_EVENT_WAIT_INFO* pWaitInfo);
-void XGLAPI xglCmdPipelineBarrier(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_PIPELINE_BARRIER* pBarrier);
+void VKAPI vkCmdPipelineBarrier(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_PIPELINE_BARRIER* pBarrier);
-void XGLAPI xglCmdBeginQuery(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+void VKAPI vkCmdBeginQuery(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t slot,
- XGL_FLAGS flags);
+ VK_FLAGS flags);
-void XGLAPI xglCmdEndQuery(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+void VKAPI vkCmdEndQuery(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t slot);
-void XGLAPI xglCmdResetQueryPool(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_QUERY_POOL queryPool,
+void VKAPI vkCmdResetQueryPool(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_QUERY_POOL queryPool,
uint32_t startQuery,
uint32_t queryCount);
-void XGLAPI xglCmdWriteTimestamp(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_TIMESTAMP_TYPE timestampType,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset);
+void VKAPI vkCmdWriteTimestamp(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_TIMESTAMP_TYPE timestampType,
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset);
-void XGLAPI xglCmdInitAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+void VKAPI vkCmdInitAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
const uint32_t* pData);
-void XGLAPI xglCmdLoadAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+void VKAPI vkCmdLoadAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
- XGL_BUFFER srcBuffer,
- XGL_GPU_SIZE srcOffset);
+ VK_BUFFER srcBuffer,
+ VK_GPU_SIZE srcOffset);
-void XGLAPI xglCmdSaveAtomicCounters(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
+void VKAPI vkCmdSaveAtomicCounters(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
uint32_t startCounter,
uint32_t counterCount,
- XGL_BUFFER destBuffer,
- XGL_GPU_SIZE destOffset);
+ VK_BUFFER destBuffer,
+ VK_GPU_SIZE destOffset);
-XGL_RESULT XGLAPI xglCreateFramebuffer(
- XGL_DEVICE device,
- const XGL_FRAMEBUFFER_CREATE_INFO* pCreateInfo,
- XGL_FRAMEBUFFER* pFramebuffer);
+VK_RESULT VKAPI vkCreateFramebuffer(
+ VK_DEVICE device,
+ const VK_FRAMEBUFFER_CREATE_INFO* pCreateInfo,
+ VK_FRAMEBUFFER* pFramebuffer);
-XGL_RESULT XGLAPI xglCreateRenderPass(
- XGL_DEVICE device,
- const XGL_RENDER_PASS_CREATE_INFO* pCreateInfo,
- XGL_RENDER_PASS* pRenderPass);
+VK_RESULT VKAPI vkCreateRenderPass(
+ VK_DEVICE device,
+ const VK_RENDER_PASS_CREATE_INFO* pCreateInfo,
+ VK_RENDER_PASS* pRenderPass);
-void XGLAPI xglCmdBeginRenderPass(
- XGL_CMD_BUFFER cmdBuffer,
- const XGL_RENDER_PASS_BEGIN* pRenderPassBegin);
+void VKAPI vkCmdBeginRenderPass(
+ VK_CMD_BUFFER cmdBuffer,
+ const VK_RENDER_PASS_BEGIN* pRenderPassBegin);
-void XGLAPI xglCmdEndRenderPass(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_RENDER_PASS renderPass);
+void VKAPI vkCmdEndRenderPass(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_RENDER_PASS renderPass);
-#endif // XGL_PROTOTYPES
+#endif // VK_PROTOTYPES
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
-#endif // __XGL_H__
+#endif // __VULKAN_H__
/******************************************************************************************
To incorporate transform feedback, we could create a new pipeline stage. This would
be injected into a PSO by including the following in the chain:
- typedef struct _XGL_XFB_CREATE_INFO
+ typedef struct _VK_XFB_CREATE_INFO
{
- XGL_STRUCTURE_TYPE sType; // Must be XGL_STRUCTURE_TYPE_PIPELINE_XFB_CREATE_INFO
+ VK_STRUCTURE_TYPE sType; // Must be VK_STRUCTURE_TYPE_PIPELINE_XFB_CREATE_INFO
const void* pNext; // Pointer to next structure
// More XFB state, if any goes here
- } XGL_DEPTH_STENCIL_VIEW_CREATE_INFO;
+ } VK_XFB_CREATE_INFO;
We expect that only the shader-side configuration (via layout qualifiers or their IR
equivalent) is used to configure the data written to each stream. When transform
feedback is part of the pipeline, transform feedback binding would be available
through a new API bind point:
- xglCmdBindTransformFeedbackMemoryView(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint, // = GRAPHICS
+ vkCmdBindTransformFeedbackMemoryView(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint, // = GRAPHICS
uint32_t index,
- const XGL_MEMORY_VIEW_ATTACH_INFO* pMemView);
+ const VK_MEMORY_VIEW_ATTACH_INFO* pMemView);
2) "Bindless" + support for non-bindless hardware.
- XGL doesn't have bindless textures the way that GL does. It has resource descriptor
+ VK doesn't have bindless textures the way that GL does. It has resource descriptor
sets, or resource tables. Resource tables can be nested and hold references to more
resource tables. They are explicitly sized by the application and have no artificial
upper size limit. An application can still attach as many textures as it wants to
-#ifndef __XGLDBG_H__
-#define __XGLDBG_H__
+#ifndef __VKDBG_H__
+#define __VKDBG_H__
-#include <xgl.h>
+#include <vulkan.h>
#ifdef __cplusplus
extern "C"
{
#endif // __cplusplus
-typedef enum _XGL_DBG_MSG_TYPE
+typedef enum _VK_DBG_MSG_TYPE
{
- XGL_DBG_MSG_UNKNOWN = 0x0,
- XGL_DBG_MSG_ERROR = 0x1,
- XGL_DBG_MSG_WARNING = 0x2,
- XGL_DBG_MSG_PERF_WARNING = 0x3,
+ VK_DBG_MSG_UNKNOWN = 0x0,
+ VK_DBG_MSG_ERROR = 0x1,
+ VK_DBG_MSG_WARNING = 0x2,
+ VK_DBG_MSG_PERF_WARNING = 0x3,
- XGL_DBG_MSG_TYPE_BEGIN_RANGE = XGL_DBG_MSG_UNKNOWN,
- XGL_DBG_MSG_TYPE_END_RANGE = XGL_DBG_MSG_PERF_WARNING,
- XGL_NUM_DBG_MSG_TYPE = (XGL_DBG_MSG_TYPE_END_RANGE - XGL_DBG_MSG_TYPE_BEGIN_RANGE + 1),
-} XGL_DBG_MSG_TYPE;
+ VK_DBG_MSG_TYPE_BEGIN_RANGE = VK_DBG_MSG_UNKNOWN,
+ VK_DBG_MSG_TYPE_END_RANGE = VK_DBG_MSG_PERF_WARNING,
+ VK_NUM_DBG_MSG_TYPE = (VK_DBG_MSG_TYPE_END_RANGE - VK_DBG_MSG_TYPE_BEGIN_RANGE + 1),
+} VK_DBG_MSG_TYPE;
-typedef enum _XGL_DBG_MSG_FILTER
+typedef enum _VK_DBG_MSG_FILTER
{
- XGL_DBG_MSG_FILTER_NONE = 0x0,
- XGL_DBG_MSG_FILTER_REPEATED = 0x1,
- XGL_DBG_MSG_FILTER_ALL = 0x2,
+ VK_DBG_MSG_FILTER_NONE = 0x0,
+ VK_DBG_MSG_FILTER_REPEATED = 0x1,
+ VK_DBG_MSG_FILTER_ALL = 0x2,
- XGL_DBG_MSG_FILTER_BEGIN_RANGE = XGL_DBG_MSG_FILTER_NONE,
- XGL_DBG_MSG_FILTER_END_RANGE = XGL_DBG_MSG_FILTER_ALL,
- XGL_NUM_DBG_MSG_FILTER = (XGL_DBG_MSG_FILTER_END_RANGE - XGL_DBG_MSG_FILTER_BEGIN_RANGE + 1),
-} XGL_DBG_MSG_FILTER;
+ VK_DBG_MSG_FILTER_BEGIN_RANGE = VK_DBG_MSG_FILTER_NONE,
+ VK_DBG_MSG_FILTER_END_RANGE = VK_DBG_MSG_FILTER_ALL,
+ VK_NUM_DBG_MSG_FILTER = (VK_DBG_MSG_FILTER_END_RANGE - VK_DBG_MSG_FILTER_BEGIN_RANGE + 1),
+} VK_DBG_MSG_FILTER;
-typedef enum _XGL_DBG_GLOBAL_OPTION
+typedef enum _VK_DBG_GLOBAL_OPTION
{
- XGL_DBG_OPTION_DEBUG_ECHO_ENABLE = 0x0,
- XGL_DBG_OPTION_BREAK_ON_ERROR = 0x1,
- XGL_DBG_OPTION_BREAK_ON_WARNING = 0x2,
+ VK_DBG_OPTION_DEBUG_ECHO_ENABLE = 0x0,
+ VK_DBG_OPTION_BREAK_ON_ERROR = 0x1,
+ VK_DBG_OPTION_BREAK_ON_WARNING = 0x2,
- XGL_DBG_GLOBAL_OPTION_BEGIN_RANGE = XGL_DBG_OPTION_DEBUG_ECHO_ENABLE,
- XGL_DBG_GLOBAL_OPTION_END_RANGE = XGL_DBG_OPTION_BREAK_ON_WARNING,
- XGL_NUM_DBG_GLOBAL_OPTION = (XGL_DBG_GLOBAL_OPTION_END_RANGE - XGL_DBG_GLOBAL_OPTION_BEGIN_RANGE + 1),
-} XGL_DBG_GLOBAL_OPTION;
+ VK_DBG_GLOBAL_OPTION_BEGIN_RANGE = VK_DBG_OPTION_DEBUG_ECHO_ENABLE,
+ VK_DBG_GLOBAL_OPTION_END_RANGE = VK_DBG_OPTION_BREAK_ON_WARNING,
+ VK_NUM_DBG_GLOBAL_OPTION = (VK_DBG_GLOBAL_OPTION_END_RANGE - VK_DBG_GLOBAL_OPTION_BEGIN_RANGE + 1),
+} VK_DBG_GLOBAL_OPTION;
-typedef enum _XGL_DBG_DEVICE_OPTION
+typedef enum _VK_DBG_DEVICE_OPTION
{
- XGL_DBG_OPTION_DISABLE_PIPELINE_LOADS = 0x0,
- XGL_DBG_OPTION_FORCE_OBJECT_MEMORY_REQS = 0x1,
- XGL_DBG_OPTION_FORCE_LARGE_IMAGE_ALIGNMENT = 0x2,
+ VK_DBG_OPTION_DISABLE_PIPELINE_LOADS = 0x0,
+ VK_DBG_OPTION_FORCE_OBJECT_MEMORY_REQS = 0x1,
+ VK_DBG_OPTION_FORCE_LARGE_IMAGE_ALIGNMENT = 0x2,
- XGL_DBG_DEVICE_OPTION_BEGIN_RANGE = XGL_DBG_OPTION_DISABLE_PIPELINE_LOADS,
- XGL_DBG_DEVICE_OPTION_END_RANGE = XGL_DBG_OPTION_FORCE_LARGE_IMAGE_ALIGNMENT,
- XGL_NUM_DBG_DEVICE_OPTION = (XGL_DBG_DEVICE_OPTION_END_RANGE - XGL_DBG_DEVICE_OPTION_BEGIN_RANGE + 1),
-} XGL_DBG_DEVICE_OPTION;
+ VK_DBG_DEVICE_OPTION_BEGIN_RANGE = VK_DBG_OPTION_DISABLE_PIPELINE_LOADS,
+ VK_DBG_DEVICE_OPTION_END_RANGE = VK_DBG_OPTION_FORCE_LARGE_IMAGE_ALIGNMENT,
+ VK_NUM_DBG_DEVICE_OPTION = (VK_DBG_DEVICE_OPTION_END_RANGE - VK_DBG_DEVICE_OPTION_BEGIN_RANGE + 1),
+} VK_DBG_DEVICE_OPTION;
-typedef enum _XGL_DBG_OBJECT_TYPE
+typedef enum _VK_DBG_OBJECT_TYPE
{
- XGL_DBG_OBJECT_UNKNOWN = 0x00,
- XGL_DBG_OBJECT_DEVICE = 0x01,
- XGL_DBG_OBJECT_QUEUE = 0x02,
- XGL_DBG_OBJECT_GPU_MEMORY = 0x03,
- XGL_DBG_OBJECT_IMAGE = 0x04,
- XGL_DBG_OBJECT_IMAGE_VIEW = 0x05,
- XGL_DBG_OBJECT_COLOR_TARGET_VIEW = 0x06,
- XGL_DBG_OBJECT_DEPTH_STENCIL_VIEW = 0x07,
- XGL_DBG_OBJECT_SHADER = 0x08,
- XGL_DBG_OBJECT_GRAPHICS_PIPELINE = 0x09,
- XGL_DBG_OBJECT_COMPUTE_PIPELINE = 0x0a,
- XGL_DBG_OBJECT_SAMPLER = 0x0b,
- XGL_DBG_OBJECT_DESCRIPTOR_SET = 0x0c,
- XGL_DBG_OBJECT_VIEWPORT_STATE = 0x0d,
- XGL_DBG_OBJECT_RASTER_STATE = 0x0e,
- XGL_DBG_OBJECT_MSAA_STATE = 0x0f,
- XGL_DBG_OBJECT_COLOR_BLEND_STATE = 0x10,
- XGL_DBG_OBJECT_DEPTH_STENCIL_STATE = 0x11,
- XGL_DBG_OBJECT_CMD_BUFFER = 0x12,
- XGL_DBG_OBJECT_FENCE = 0x13,
- XGL_DBG_OBJECT_SEMAPHORE = 0x14,
- XGL_DBG_OBJECT_EVENT = 0x15,
- XGL_DBG_OBJECT_QUERY_POOL = 0x16,
- XGL_DBG_OBJECT_SHARED_GPU_MEMORY = 0x17,
- XGL_DBG_OBJECT_SHARED_SEMAPHORE = 0x18,
- XGL_DBG_OBJECT_PEER_GPU_MEMORY = 0x19,
- XGL_DBG_OBJECT_PEER_IMAGE = 0x1a,
- XGL_DBG_OBJECT_PINNED_GPU_MEMORY = 0x1b,
- XGL_DBG_OBJECT_INTERNAL_GPU_MEMORY = 0x1c,
- XGL_DBG_OBJECT_FRAMEBUFFER = 0x1d,
- XGL_DBG_OBJECT_RENDER_PASS = 0x1e,
-
- XGL_DBG_OBJECT_INSTANCE,
- XGL_DBG_OBJECT_BUFFER,
- XGL_DBG_OBJECT_BUFFER_VIEW,
- XGL_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT,
- XGL_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT_CHAIN,
- XGL_DBG_OBJECT_DESCRIPTOR_POOL,
-
- XGL_DBG_OBJECT_TYPE_BEGIN_RANGE = XGL_DBG_OBJECT_UNKNOWN,
- XGL_DBG_OBJECT_TYPE_END_RANGE = XGL_DBG_OBJECT_DESCRIPTOR_POOL,
- XGL_NUM_DBG_OBJECT_TYPE = (XGL_DBG_OBJECT_TYPE_END_RANGE - XGL_DBG_OBJECT_TYPE_BEGIN_RANGE + 1),
-} XGL_DBG_OBJECT_TYPE;
-
-typedef void (XGLAPI *XGL_DBG_MSG_CALLBACK_FUNCTION)(
- XGL_DBG_MSG_TYPE msgType,
- XGL_VALIDATION_LEVEL validationLevel,
- XGL_BASE_OBJECT srcObject,
+ VK_DBG_OBJECT_UNKNOWN = 0x00,
+ VK_DBG_OBJECT_DEVICE = 0x01,
+ VK_DBG_OBJECT_QUEUE = 0x02,
+ VK_DBG_OBJECT_GPU_MEMORY = 0x03,
+ VK_DBG_OBJECT_IMAGE = 0x04,
+ VK_DBG_OBJECT_IMAGE_VIEW = 0x05,
+ VK_DBG_OBJECT_COLOR_TARGET_VIEW = 0x06,
+ VK_DBG_OBJECT_DEPTH_STENCIL_VIEW = 0x07,
+ VK_DBG_OBJECT_SHADER = 0x08,
+ VK_DBG_OBJECT_GRAPHICS_PIPELINE = 0x09,
+ VK_DBG_OBJECT_COMPUTE_PIPELINE = 0x0a,
+ VK_DBG_OBJECT_SAMPLER = 0x0b,
+ VK_DBG_OBJECT_DESCRIPTOR_SET = 0x0c,
+ VK_DBG_OBJECT_VIEWPORT_STATE = 0x0d,
+ VK_DBG_OBJECT_RASTER_STATE = 0x0e,
+ VK_DBG_OBJECT_MSAA_STATE = 0x0f,
+ VK_DBG_OBJECT_COLOR_BLEND_STATE = 0x10,
+ VK_DBG_OBJECT_DEPTH_STENCIL_STATE = 0x11,
+ VK_DBG_OBJECT_CMD_BUFFER = 0x12,
+ VK_DBG_OBJECT_FENCE = 0x13,
+ VK_DBG_OBJECT_SEMAPHORE = 0x14,
+ VK_DBG_OBJECT_EVENT = 0x15,
+ VK_DBG_OBJECT_QUERY_POOL = 0x16,
+ VK_DBG_OBJECT_SHARED_GPU_MEMORY = 0x17,
+ VK_DBG_OBJECT_SHARED_SEMAPHORE = 0x18,
+ VK_DBG_OBJECT_PEER_GPU_MEMORY = 0x19,
+ VK_DBG_OBJECT_PEER_IMAGE = 0x1a,
+ VK_DBG_OBJECT_PINNED_GPU_MEMORY = 0x1b,
+ VK_DBG_OBJECT_INTERNAL_GPU_MEMORY = 0x1c,
+ VK_DBG_OBJECT_FRAMEBUFFER = 0x1d,
+ VK_DBG_OBJECT_RENDER_PASS = 0x1e,
+
+ VK_DBG_OBJECT_INSTANCE,
+ VK_DBG_OBJECT_BUFFER,
+ VK_DBG_OBJECT_BUFFER_VIEW,
+ VK_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT,
+ VK_DBG_OBJECT_DESCRIPTOR_SET_LAYOUT_CHAIN,
+ VK_DBG_OBJECT_DESCRIPTOR_POOL,
+
+ VK_DBG_OBJECT_TYPE_BEGIN_RANGE = VK_DBG_OBJECT_UNKNOWN,
+ VK_DBG_OBJECT_TYPE_END_RANGE = VK_DBG_OBJECT_DESCRIPTOR_POOL,
+ VK_NUM_DBG_OBJECT_TYPE = (VK_DBG_OBJECT_TYPE_END_RANGE - VK_DBG_OBJECT_TYPE_BEGIN_RANGE + 1),
+} VK_DBG_OBJECT_TYPE;
+
+typedef void (VKAPI *VK_DBG_MSG_CALLBACK_FUNCTION)(
+ VK_DBG_MSG_TYPE msgType,
+ VK_VALIDATION_LEVEL validationLevel,
+ VK_BASE_OBJECT srcObject,
size_t location,
int32_t msgCode,
const char* pMsg,
void* pUserData);
// Debug functions
-typedef XGL_RESULT (XGLAPI *xglDbgSetValidationLevelType)(XGL_DEVICE device, XGL_VALIDATION_LEVEL validationLevel);
-typedef XGL_RESULT (XGLAPI *xglDbgRegisterMsgCallbackType)(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData);
-typedef XGL_RESULT (XGLAPI *xglDbgUnregisterMsgCallbackType)(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback);
-typedef XGL_RESULT (XGLAPI *xglDbgSetMessageFilterType)(XGL_DEVICE device, int32_t msgCode, XGL_DBG_MSG_FILTER filter);
-typedef XGL_RESULT (XGLAPI *xglDbgSetObjectTagType)(XGL_BASE_OBJECT object, size_t tagSize, const void* pTag);
-typedef XGL_RESULT (XGLAPI *xglDbgSetGlobalOptionType)(XGL_INSTANCE instance, XGL_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData);
-typedef XGL_RESULT (XGLAPI *xglDbgSetDeviceOptionType)(XGL_DEVICE device, XGL_DBG_DEVICE_OPTION dbgOption, size_t dataSize, const void* pData);
-typedef void (XGLAPI *xglCmdDbgMarkerBeginType)(XGL_CMD_BUFFER cmdBuffer, const char* pMarker);
-typedef void (XGLAPI *xglCmdDbgMarkerEndType)(XGL_CMD_BUFFER cmdBuffer);
-
-#ifdef XGL_PROTOTYPES
-XGL_RESULT XGLAPI xglDbgSetValidationLevel(
- XGL_DEVICE device,
- XGL_VALIDATION_LEVEL validationLevel);
-
-XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(
- XGL_INSTANCE instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback,
+typedef VK_RESULT (VKAPI *vkDbgSetValidationLevelType)(VK_DEVICE device, VK_VALIDATION_LEVEL validationLevel);
+typedef VK_RESULT (VKAPI *vkDbgRegisterMsgCallbackType)(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData);
+typedef VK_RESULT (VKAPI *vkDbgUnregisterMsgCallbackType)(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback);
+typedef VK_RESULT (VKAPI *vkDbgSetMessageFilterType)(VK_DEVICE device, int32_t msgCode, VK_DBG_MSG_FILTER filter);
+typedef VK_RESULT (VKAPI *vkDbgSetObjectTagType)(VK_BASE_OBJECT object, size_t tagSize, const void* pTag);
+typedef VK_RESULT (VKAPI *vkDbgSetGlobalOptionType)(VK_INSTANCE instance, VK_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData);
+typedef VK_RESULT (VKAPI *vkDbgSetDeviceOptionType)(VK_DEVICE device, VK_DBG_DEVICE_OPTION dbgOption, size_t dataSize, const void* pData);
+typedef void (VKAPI *vkCmdDbgMarkerBeginType)(VK_CMD_BUFFER cmdBuffer, const char* pMarker);
+typedef void (VKAPI *vkCmdDbgMarkerEndType)(VK_CMD_BUFFER cmdBuffer);
+
+#ifdef VK_PROTOTYPES
+VK_RESULT VKAPI vkDbgSetValidationLevel(
+ VK_DEVICE device,
+ VK_VALIDATION_LEVEL validationLevel);
+
+VK_RESULT VKAPI vkDbgRegisterMsgCallback(
+ VK_INSTANCE instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback,
void* pUserData);
-XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(
- XGL_INSTANCE instance,
- XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback);
+VK_RESULT VKAPI vkDbgUnregisterMsgCallback(
+ VK_INSTANCE instance,
+ VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback);
-XGL_RESULT XGLAPI xglDbgSetMessageFilter(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkDbgSetMessageFilter(
+ VK_DEVICE device,
int32_t msgCode,
- XGL_DBG_MSG_FILTER filter);
+ VK_DBG_MSG_FILTER filter);
-XGL_RESULT XGLAPI xglDbgSetObjectTag(
- XGL_BASE_OBJECT object,
+VK_RESULT VKAPI vkDbgSetObjectTag(
+ VK_BASE_OBJECT object,
size_t tagSize,
const void* pTag);
-XGL_RESULT XGLAPI xglDbgSetGlobalOption(
- XGL_INSTANCE instance,
- XGL_DBG_GLOBAL_OPTION dbgOption,
+VK_RESULT VKAPI vkDbgSetGlobalOption(
+ VK_INSTANCE instance,
+ VK_DBG_GLOBAL_OPTION dbgOption,
size_t dataSize,
const void* pData);
-XGL_RESULT XGLAPI xglDbgSetDeviceOption(
- XGL_DEVICE device,
- XGL_DBG_DEVICE_OPTION dbgOption,
+VK_RESULT VKAPI vkDbgSetDeviceOption(
+ VK_DEVICE device,
+ VK_DBG_DEVICE_OPTION dbgOption,
size_t dataSize,
const void* pData);
-void XGLAPI xglCmdDbgMarkerBegin(
- XGL_CMD_BUFFER cmdBuffer,
+void VKAPI vkCmdDbgMarkerBegin(
+ VK_CMD_BUFFER cmdBuffer,
const char* pMarker);
-void XGLAPI xglCmdDbgMarkerEnd(
- XGL_CMD_BUFFER cmdBuffer);
+void VKAPI vkCmdDbgMarkerEnd(
+ VK_CMD_BUFFER cmdBuffer);
-#endif // XGL_PROTOTYPES
+#endif // VK_PROTOTYPES
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
-#endif // __XGLDBG_H__
+#endif // __VKDBG_H__
-#ifndef XGLICD_H
-#define XGLICD_H
+#ifndef VKICD_H
+#define VKICD_H
#include <stdint.h>
#include <stdbool.h>
-#include "xglPlatform.h"
+#include "vkPlatform.h"
/*
 * The ICD must reserve space for a pointer for the loader's dispatch
 * table at the start of each dispatchable object.
 */
#define ICD_LOADER_MAGIC 0x01CDC0DE
-typedef union _XGL_LOADER_DATA {
+typedef union _VK_LOADER_DATA {
uint32_t loaderMagic;
void *loaderData;
-} XGL_LOADER_DATA;
+} VK_LOADER_DATA;
static inline void set_loader_magic_value(void *pNewObject) {
- XGL_LOADER_DATA *loader_info = (XGL_LOADER_DATA *) pNewObject;
+ VK_LOADER_DATA *loader_info = (VK_LOADER_DATA *) pNewObject;
loader_info->loaderMagic = ICD_LOADER_MAGIC;
}
static inline bool valid_loader_magic_value(void *pNewObject) {
- const XGL_LOADER_DATA *loader_info = (XGL_LOADER_DATA *) pNewObject;
+ const VK_LOADER_DATA *loader_info = (VK_LOADER_DATA *) pNewObject;
return loader_info->loaderMagic == ICD_LOADER_MAGIC;
}
-#endif // XGLICD_H
+#endif // VKICD_H
*/
#pragma once
-#include "xgl.h"
-#include "xglDbg.h"
+#include "vulkan.h"
+#include "vkDbg.h"
#if defined(__linux__) || defined(XCB_NVIDIA)
-#include "xglWsiX11Ext.h"
+#include "vkWsiX11Ext.h"
#endif
#if defined(__GNUC__) && __GNUC__ >= 4
-# define XGL_LAYER_EXPORT __attribute__((visibility("default")))
+# define VK_LAYER_EXPORT __attribute__((visibility("default")))
#elif defined(__SUNPRO_C) && (__SUNPRO_C >= 0x590)
-# define XGL_LAYER_EXPORT __attribute__((visibility("default")))
+# define VK_LAYER_EXPORT __attribute__((visibility("default")))
#else
-# define XGL_LAYER_EXPORT
+# define VK_LAYER_EXPORT
#endif
-typedef struct _XGL_BASE_LAYER_OBJECT
+typedef struct _VK_BASE_LAYER_OBJECT
{
- xglGetProcAddrType pGPA;
- XGL_BASE_OBJECT nextObject;
- XGL_BASE_OBJECT baseObject;
-} XGL_BASE_LAYER_OBJECT;
+ vkGetProcAddrType pGPA;
+ VK_BASE_OBJECT nextObject;
+ VK_BASE_OBJECT baseObject;
+} VK_BASE_LAYER_OBJECT;
-typedef struct _XGL_LAYER_DISPATCH_TABLE
+typedef struct _VK_LAYER_DISPATCH_TABLE
{
- xglGetProcAddrType GetProcAddr;
- xglCreateInstanceType CreateInstance;
- xglDestroyInstanceType DestroyInstance;
- xglEnumerateGpusType EnumerateGpus;
- xglGetGpuInfoType GetGpuInfo;
- xglCreateDeviceType CreateDevice;
- xglDestroyDeviceType DestroyDevice;
- xglGetExtensionSupportType GetExtensionSupport;
- xglEnumerateLayersType EnumerateLayers;
- xglGetDeviceQueueType GetDeviceQueue;
- xglQueueSubmitType QueueSubmit;
- xglQueueAddMemReferenceType QueueAddMemReference;
- xglQueueRemoveMemReferenceType QueueRemoveMemReference;
- xglQueueWaitIdleType QueueWaitIdle;
- xglDeviceWaitIdleType DeviceWaitIdle;
- xglAllocMemoryType AllocMemory;
- xglFreeMemoryType FreeMemory;
- xglSetMemoryPriorityType SetMemoryPriority;
- xglMapMemoryType MapMemory;
- xglUnmapMemoryType UnmapMemory;
- xglPinSystemMemoryType PinSystemMemory;
- xglGetMultiGpuCompatibilityType GetMultiGpuCompatibility;
- xglOpenSharedMemoryType OpenSharedMemory;
- xglOpenSharedSemaphoreType OpenSharedSemaphore;
- xglOpenPeerMemoryType OpenPeerMemory;
- xglOpenPeerImageType OpenPeerImage;
- xglDestroyObjectType DestroyObject;
- xglGetObjectInfoType GetObjectInfo;
- xglBindObjectMemoryType BindObjectMemory;
- xglBindObjectMemoryRangeType BindObjectMemoryRange;
- xglBindImageMemoryRangeType BindImageMemoryRange;
- xglCreateFenceType CreateFence;
- xglGetFenceStatusType GetFenceStatus;
- xglResetFencesType ResetFences;
- xglWaitForFencesType WaitForFences;
- xglCreateSemaphoreType CreateSemaphore;
- xglQueueSignalSemaphoreType QueueSignalSemaphore;
- xglQueueWaitSemaphoreType QueueWaitSemaphore;
- xglCreateEventType CreateEvent;
- xglGetEventStatusType GetEventStatus;
- xglSetEventType SetEvent;
- xglResetEventType ResetEvent;
- xglCreateQueryPoolType CreateQueryPool;
- xglGetQueryPoolResultsType GetQueryPoolResults;
- xglGetFormatInfoType GetFormatInfo;
- xglCreateBufferType CreateBuffer;
- xglCreateBufferViewType CreateBufferView;
- xglCreateImageType CreateImage;
- xglGetImageSubresourceInfoType GetImageSubresourceInfo;
- xglCreateImageViewType CreateImageView;
- xglCreateColorAttachmentViewType CreateColorAttachmentView;
- xglCreateDepthStencilViewType CreateDepthStencilView;
- xglCreateShaderType CreateShader;
- xglCreateGraphicsPipelineType CreateGraphicsPipeline;
- xglCreateGraphicsPipelineDerivativeType CreateGraphicsPipelineDerivative;
- xglCreateComputePipelineType CreateComputePipeline;
- xglStorePipelineType StorePipeline;
- xglLoadPipelineType LoadPipeline;
- xglLoadPipelineDerivativeType LoadPipelineDerivative;
- xglCreateSamplerType CreateSampler;
- xglCreateDescriptorSetLayoutType CreateDescriptorSetLayout;
- xglCreateDescriptorSetLayoutChainType CreateDescriptorSetLayoutChain;
- xglBeginDescriptorPoolUpdateType BeginDescriptorPoolUpdate;
- xglEndDescriptorPoolUpdateType EndDescriptorPoolUpdate;
- xglCreateDescriptorPoolType CreateDescriptorPool;
- xglResetDescriptorPoolType ResetDescriptorPool;
- xglAllocDescriptorSetsType AllocDescriptorSets;
- xglClearDescriptorSetsType ClearDescriptorSets;
- xglUpdateDescriptorsType UpdateDescriptors;
- xglCreateDynamicViewportStateType CreateDynamicViewportState;
- xglCreateDynamicRasterStateType CreateDynamicRasterState;
- xglCreateDynamicColorBlendStateType CreateDynamicColorBlendState;
- xglCreateDynamicDepthStencilStateType CreateDynamicDepthStencilState;
- xglCreateCommandBufferType CreateCommandBuffer;
- xglBeginCommandBufferType BeginCommandBuffer;
- xglEndCommandBufferType EndCommandBuffer;
- xglResetCommandBufferType ResetCommandBuffer;
- xglCmdBindPipelineType CmdBindPipeline;
- xglCmdBindDynamicStateObjectType CmdBindDynamicStateObject;
- xglCmdBindDescriptorSetsType CmdBindDescriptorSets;
- xglCmdBindVertexBufferType CmdBindVertexBuffer;
- xglCmdBindIndexBufferType CmdBindIndexBuffer;
- xglCmdDrawType CmdDraw;
- xglCmdDrawIndexedType CmdDrawIndexed;
- xglCmdDrawIndirectType CmdDrawIndirect;
- xglCmdDrawIndexedIndirectType CmdDrawIndexedIndirect;
- xglCmdDispatchType CmdDispatch;
- xglCmdDispatchIndirectType CmdDispatchIndirect;
- xglCmdCopyBufferType CmdCopyBuffer;
- xglCmdCopyImageType CmdCopyImage;
- xglCmdBlitImageType CmdBlitImage;
- xglCmdCopyBufferToImageType CmdCopyBufferToImage;
- xglCmdCopyImageToBufferType CmdCopyImageToBuffer;
- xglCmdCloneImageDataType CmdCloneImageData;
- xglCmdUpdateBufferType CmdUpdateBuffer;
- xglCmdFillBufferType CmdFillBuffer;
- xglCmdClearColorImageType CmdClearColorImage;
- xglCmdClearDepthStencilType CmdClearDepthStencil;
- xglCmdResolveImageType CmdResolveImage;
- xglCmdSetEventType CmdSetEvent;
- xglCmdResetEventType CmdResetEvent;
- xglCmdWaitEventsType CmdWaitEvents;
- xglCmdPipelineBarrierType CmdPipelineBarrier;
- xglCmdBeginQueryType CmdBeginQuery;
- xglCmdEndQueryType CmdEndQuery;
- xglCmdResetQueryPoolType CmdResetQueryPool;
- xglCmdWriteTimestampType CmdWriteTimestamp;
- xglCmdInitAtomicCountersType CmdInitAtomicCounters;
- xglCmdLoadAtomicCountersType CmdLoadAtomicCounters;
- xglCmdSaveAtomicCountersType CmdSaveAtomicCounters;
- xglCreateFramebufferType CreateFramebuffer;
- xglCreateRenderPassType CreateRenderPass;
- xglCmdBeginRenderPassType CmdBeginRenderPass;
- xglCmdEndRenderPassType CmdEndRenderPass;
- xglDbgSetValidationLevelType DbgSetValidationLevel;
- xglDbgRegisterMsgCallbackType DbgRegisterMsgCallback;
- xglDbgUnregisterMsgCallbackType DbgUnregisterMsgCallback;
- xglDbgSetMessageFilterType DbgSetMessageFilter;
- xglDbgSetObjectTagType DbgSetObjectTag;
- xglDbgSetGlobalOptionType DbgSetGlobalOption;
- xglDbgSetDeviceOptionType DbgSetDeviceOption;
- xglCmdDbgMarkerBeginType CmdDbgMarkerBegin;
- xglCmdDbgMarkerEndType CmdDbgMarkerEnd;
+ vkGetProcAddrType GetProcAddr;
+ vkCreateInstanceType CreateInstance;
+ vkDestroyInstanceType DestroyInstance;
+ vkEnumerateGpusType EnumerateGpus;
+ vkGetGpuInfoType GetGpuInfo;
+ vkCreateDeviceType CreateDevice;
+ vkDestroyDeviceType DestroyDevice;
+ vkGetExtensionSupportType GetExtensionSupport;
+ vkEnumerateLayersType EnumerateLayers;
+ vkGetDeviceQueueType GetDeviceQueue;
+ vkQueueSubmitType QueueSubmit;
+ vkQueueAddMemReferenceType QueueAddMemReference;
+ vkQueueRemoveMemReferenceType QueueRemoveMemReference;
+ vkQueueWaitIdleType QueueWaitIdle;
+ vkDeviceWaitIdleType DeviceWaitIdle;
+ vkAllocMemoryType AllocMemory;
+ vkFreeMemoryType FreeMemory;
+ vkSetMemoryPriorityType SetMemoryPriority;
+ vkMapMemoryType MapMemory;
+ vkUnmapMemoryType UnmapMemory;
+ vkPinSystemMemoryType PinSystemMemory;
+ vkGetMultiGpuCompatibilityType GetMultiGpuCompatibility;
+ vkOpenSharedMemoryType OpenSharedMemory;
+ vkOpenSharedSemaphoreType OpenSharedSemaphore;
+ vkOpenPeerMemoryType OpenPeerMemory;
+ vkOpenPeerImageType OpenPeerImage;
+ vkDestroyObjectType DestroyObject;
+ vkGetObjectInfoType GetObjectInfo;
+ vkBindObjectMemoryType BindObjectMemory;
+ vkBindObjectMemoryRangeType BindObjectMemoryRange;
+ vkBindImageMemoryRangeType BindImageMemoryRange;
+ vkCreateFenceType CreateFence;
+ vkGetFenceStatusType GetFenceStatus;
+ vkResetFencesType ResetFences;
+ vkWaitForFencesType WaitForFences;
+ vkCreateSemaphoreType CreateSemaphore;
+ vkQueueSignalSemaphoreType QueueSignalSemaphore;
+ vkQueueWaitSemaphoreType QueueWaitSemaphore;
+ vkCreateEventType CreateEvent;
+ vkGetEventStatusType GetEventStatus;
+ vkSetEventType SetEvent;
+ vkResetEventType ResetEvent;
+ vkCreateQueryPoolType CreateQueryPool;
+ vkGetQueryPoolResultsType GetQueryPoolResults;
+ vkGetFormatInfoType GetFormatInfo;
+ vkCreateBufferType CreateBuffer;
+ vkCreateBufferViewType CreateBufferView;
+ vkCreateImageType CreateImage;
+ vkGetImageSubresourceInfoType GetImageSubresourceInfo;
+ vkCreateImageViewType CreateImageView;
+ vkCreateColorAttachmentViewType CreateColorAttachmentView;
+ vkCreateDepthStencilViewType CreateDepthStencilView;
+ vkCreateShaderType CreateShader;
+ vkCreateGraphicsPipelineType CreateGraphicsPipeline;
+ vkCreateGraphicsPipelineDerivativeType CreateGraphicsPipelineDerivative;
+ vkCreateComputePipelineType CreateComputePipeline;
+ vkStorePipelineType StorePipeline;
+ vkLoadPipelineType LoadPipeline;
+ vkLoadPipelineDerivativeType LoadPipelineDerivative;
+ vkCreateSamplerType CreateSampler;
+ vkCreateDescriptorSetLayoutType CreateDescriptorSetLayout;
+ vkCreateDescriptorSetLayoutChainType CreateDescriptorSetLayoutChain;
+ vkBeginDescriptorPoolUpdateType BeginDescriptorPoolUpdate;
+ vkEndDescriptorPoolUpdateType EndDescriptorPoolUpdate;
+ vkCreateDescriptorPoolType CreateDescriptorPool;
+ vkResetDescriptorPoolType ResetDescriptorPool;
+ vkAllocDescriptorSetsType AllocDescriptorSets;
+ vkClearDescriptorSetsType ClearDescriptorSets;
+ vkUpdateDescriptorsType UpdateDescriptors;
+ vkCreateDynamicViewportStateType CreateDynamicViewportState;
+ vkCreateDynamicRasterStateType CreateDynamicRasterState;
+ vkCreateDynamicColorBlendStateType CreateDynamicColorBlendState;
+ vkCreateDynamicDepthStencilStateType CreateDynamicDepthStencilState;
+ vkCreateCommandBufferType CreateCommandBuffer;
+ vkBeginCommandBufferType BeginCommandBuffer;
+ vkEndCommandBufferType EndCommandBuffer;
+ vkResetCommandBufferType ResetCommandBuffer;
+ vkCmdBindPipelineType CmdBindPipeline;
+ vkCmdBindDynamicStateObjectType CmdBindDynamicStateObject;
+ vkCmdBindDescriptorSetsType CmdBindDescriptorSets;
+ vkCmdBindVertexBufferType CmdBindVertexBuffer;
+ vkCmdBindIndexBufferType CmdBindIndexBuffer;
+ vkCmdDrawType CmdDraw;
+ vkCmdDrawIndexedType CmdDrawIndexed;
+ vkCmdDrawIndirectType CmdDrawIndirect;
+ vkCmdDrawIndexedIndirectType CmdDrawIndexedIndirect;
+ vkCmdDispatchType CmdDispatch;
+ vkCmdDispatchIndirectType CmdDispatchIndirect;
+ vkCmdCopyBufferType CmdCopyBuffer;
+ vkCmdCopyImageType CmdCopyImage;
+ vkCmdBlitImageType CmdBlitImage;
+ vkCmdCopyBufferToImageType CmdCopyBufferToImage;
+ vkCmdCopyImageToBufferType CmdCopyImageToBuffer;
+ vkCmdCloneImageDataType CmdCloneImageData;
+ vkCmdUpdateBufferType CmdUpdateBuffer;
+ vkCmdFillBufferType CmdFillBuffer;
+ vkCmdClearColorImageType CmdClearColorImage;
+ vkCmdClearDepthStencilType CmdClearDepthStencil;
+ vkCmdResolveImageType CmdResolveImage;
+ vkCmdSetEventType CmdSetEvent;
+ vkCmdResetEventType CmdResetEvent;
+ vkCmdWaitEventsType CmdWaitEvents;
+ vkCmdPipelineBarrierType CmdPipelineBarrier;
+ vkCmdBeginQueryType CmdBeginQuery;
+ vkCmdEndQueryType CmdEndQuery;
+ vkCmdResetQueryPoolType CmdResetQueryPool;
+ vkCmdWriteTimestampType CmdWriteTimestamp;
+ vkCmdInitAtomicCountersType CmdInitAtomicCounters;
+ vkCmdLoadAtomicCountersType CmdLoadAtomicCounters;
+ vkCmdSaveAtomicCountersType CmdSaveAtomicCounters;
+ vkCreateFramebufferType CreateFramebuffer;
+ vkCreateRenderPassType CreateRenderPass;
+ vkCmdBeginRenderPassType CmdBeginRenderPass;
+ vkCmdEndRenderPassType CmdEndRenderPass;
+ vkDbgSetValidationLevelType DbgSetValidationLevel;
+ vkDbgRegisterMsgCallbackType DbgRegisterMsgCallback;
+ vkDbgUnregisterMsgCallbackType DbgUnregisterMsgCallback;
+ vkDbgSetMessageFilterType DbgSetMessageFilter;
+ vkDbgSetObjectTagType DbgSetObjectTag;
+ vkDbgSetGlobalOptionType DbgSetGlobalOption;
+ vkDbgSetDeviceOptionType DbgSetDeviceOption;
+ vkCmdDbgMarkerBeginType CmdDbgMarkerBegin;
+ vkCmdDbgMarkerEndType CmdDbgMarkerEnd;
#if defined(__linux__) || defined(XCB_NVIDIA)
- xglWsiX11AssociateConnectionType WsiX11AssociateConnection;
- xglWsiX11GetMSCType WsiX11GetMSC;
- xglWsiX11CreatePresentableImageType WsiX11CreatePresentableImage;
- xglWsiX11QueuePresentType WsiX11QueuePresent;
+ vkWsiX11AssociateConnectionType WsiX11AssociateConnection;
+ vkWsiX11GetMSCType WsiX11GetMSC;
+ vkWsiX11CreatePresentableImageType WsiX11CreatePresentableImage;
+ vkWsiX11QueuePresentType WsiX11QueuePresent;
#endif // __linux__ || XCB_NVIDIA
-} XGL_LAYER_DISPATCH_TABLE;
+} VK_LAYER_DISPATCH_TABLE;
// LL node for tree of dbg callback functions
-typedef struct _XGL_LAYER_DBG_FUNCTION_NODE
+typedef struct _VK_LAYER_DBG_FUNCTION_NODE
{
- XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback;
+ VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback;
void *pUserData;
- struct _XGL_LAYER_DBG_FUNCTION_NODE *pNext;
-} XGL_LAYER_DBG_FUNCTION_NODE;
+ struct _VK_LAYER_DBG_FUNCTION_NODE *pNext;
+} VK_LAYER_DBG_FUNCTION_NODE;
-typedef enum _XGL_LAYER_DBG_ACTION
+typedef enum _VK_LAYER_DBG_ACTION
{
- XGL_DBG_LAYER_ACTION_IGNORE = 0x0,
- XGL_DBG_LAYER_ACTION_CALLBACK = 0x1,
- XGL_DBG_LAYER_ACTION_LOG_MSG = 0x2,
- XGL_DBG_LAYER_ACTION_BREAK = 0x4
-} XGL_LAYER_DBG_ACTION;
+ VK_DBG_LAYER_ACTION_IGNORE = 0x0,
+ VK_DBG_LAYER_ACTION_CALLBACK = 0x1,
+ VK_DBG_LAYER_ACTION_LOG_MSG = 0x2,
+ VK_DBG_LAYER_ACTION_BREAK = 0x4
+} VK_LAYER_DBG_ACTION;
-typedef enum _XGL_LAYER_DBG_REPORT_LEVEL
+typedef enum _VK_LAYER_DBG_REPORT_LEVEL
{
- XGL_DBG_LAYER_LEVEL_INFO = 0,
- XGL_DBG_LAYER_LEVEL_WARN,
- XGL_DBG_LAYER_LEVEL_PERF_WARN,
- XGL_DBG_LAYER_LEVEL_ERROR,
- XGL_DBG_LAYER_LEVEL_NONE,
-} XGL_LAYER_DBG_REPORT_LEVEL;
+ VK_DBG_LAYER_LEVEL_INFO = 0,
+ VK_DBG_LAYER_LEVEL_WARN,
+ VK_DBG_LAYER_LEVEL_PERF_WARN,
+ VK_DBG_LAYER_LEVEL_ERROR,
+ VK_DBG_LAYER_LEVEL_NONE,
+} VK_LAYER_DBG_REPORT_LEVEL;
// ------------------------------------------------------------------------------------------------
// API functions
//
-// File: xglPlatform.h
+// File: vkPlatform.h
//
/*
** Copyright (c) 2014 The Khronos Group Inc.
*/
-#ifndef __XGLPLATFORM_H__
-#define __XGLPLATFORM_H__
+#ifndef __VKPLATFORM_H__
+#define __VKPLATFORM_H__
#ifdef __cplusplus
extern "C"
// Ensure we don't pick up min/max macros from Windef.h
#define NOMINMAX
- // On Windows, XGLAPI should equate to the __stdcall convention
- #define XGLAPI __stdcall
+ // On Windows, VKAPI should equate to the __stdcall convention
+ #define VKAPI __stdcall
// C99:
#ifndef __cplusplus
#define inline __inline
#endif // __cplusplus
#elif defined(__GNUC__)
- // On other platforms using GCC, XGLAPI stays undefined
- #define XGLAPI
+ // On other platforms using GCC, VKAPI stays undefined
+ #define VKAPI
#else
// Unsupported Platform!
#error "Unsupported OS Platform detected!"
#include <stddef.h>
-#if !defined(XGL_NO_STDINT_H)
+#if !defined(VK_NO_STDINT_H)
#if defined(_MSC_VER) && (_MSC_VER < 1600)
typedef signed __int8 int8_t;
typedef unsigned __int8 uint8_t;
#else
#include <stdint.h>
#endif
-#endif // !defined(XGL_NO_STDINT_H)
+#endif // !defined(VK_NO_STDINT_H)
-typedef uint64_t XGL_GPU_SIZE;
+typedef uint64_t VK_GPU_SIZE;
typedef uint32_t bool32_t;
-typedef uint32_t XGL_SAMPLE_MASK;
-typedef uint32_t XGL_FLAGS;
-typedef int32_t XGL_ENUM;
+typedef uint32_t VK_SAMPLE_MASK;
+typedef uint32_t VK_FLAGS;
+typedef int32_t VK_ENUM;
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
-#endif // __XGLPLATFORM_H__
+#endif // __VKPLATFORM_H__
/* IN DEVELOPMENT. DO NOT SHIP. */
-#ifndef __XGLWSIWINEXT_H__
-#define __XGLWSIWINEXT_H__
+#ifndef __VKWSIWINEXT_H__
+#define __VKWSIWINEXT_H__
// This is just to get Windows to build.
// Need to replace with the declarations for Windows wsi.
-typedef void XGL_WSI_X11_CONNECTION_INFO;
+typedef void VK_WSI_X11_CONNECTION_INFO;
typedef unsigned int xcb_window_t;
typedef unsigned int xcb_randr_crtc_t;
-typedef void XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO;
-typedef void XGL_WSI_X11_PRESENT_INFO;
+typedef void VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO;
+typedef void VK_WSI_X11_PRESENT_INFO;
-#endif // __XGLWSIWINEXT_H__
+#endif // __VKWSIWINEXT_H__
/* IN DEVELOPMENT. DO NOT SHIP. */
-#ifndef __XGLWSIX11EXT_H__
-#define __XGLWSIX11EXT_H__
+#ifndef __VKWSIX11EXT_H__
+#define __VKWSIX11EXT_H__
#include <xcb/xcb.h>
#include <xcb/randr.h>
-#include "xgl.h"
+#include "vulkan.h"
#ifdef __cplusplus
extern "C"
{
#endif // __cplusplus
-typedef struct _XGL_WSI_X11_CONNECTION_INFO {
+typedef struct _VK_WSI_X11_CONNECTION_INFO {
xcb_connection_t* pConnection;
xcb_window_t root;
xcb_randr_provider_t provider;
-} XGL_WSI_X11_CONNECTION_INFO;
+} VK_WSI_X11_CONNECTION_INFO;
-typedef struct _XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO
+typedef struct _VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO
{
- XGL_FORMAT format;
- XGL_FLAGS usage; // XGL_IMAGE_USAGE_FLAGS
- XGL_EXTENT2D extent;
- XGL_FLAGS flags;
-} XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO;
+ VK_FORMAT format;
+ VK_FLAGS usage; // VK_IMAGE_USAGE_FLAGS
+ VK_EXTENT2D extent;
+ VK_FLAGS flags;
+} VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO;
-typedef struct _XGL_WSI_X11_PRESENT_INFO
+typedef struct _VK_WSI_X11_PRESENT_INFO
{
/* which window to present to */
xcb_window_t destWindow;
- XGL_IMAGE srcImage;
+ VK_IMAGE srcImage;
/**
* After the command buffers in the queue have been completed, if the MSC
* }
*
* In other words, either set \p target_msc to an absolute value (require
- * xglWsiX11GetMSC(), potentially a round-trip to the server, to get the
+ * vkWsiX11GetMSC(), potentially a round-trip to the server, to get the
* current MSC first), or set \p target_msc to zero and set a "swap
* interval".
*
* be flipped to.
*/
bool32_t flip;
-} XGL_WSI_X11_PRESENT_INFO;
+} VK_WSI_X11_PRESENT_INFO;
-typedef XGL_RESULT (XGLAPI *xglWsiX11AssociateConnectionType)(XGL_PHYSICAL_GPU gpu, const XGL_WSI_X11_CONNECTION_INFO* pConnectionInfo);
-typedef XGL_RESULT (XGLAPI *xglWsiX11GetMSCType)(XGL_DEVICE device, xcb_window_t window, xcb_randr_crtc_t crtc, uint64_t* pMsc);
-typedef XGL_RESULT (XGLAPI *xglWsiX11CreatePresentableImageType)(XGL_DEVICE device, const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem);
-typedef XGL_RESULT (XGLAPI *xglWsiX11QueuePresentType)(XGL_QUEUE queue, const XGL_WSI_X11_PRESENT_INFO* pPresentInfo, XGL_FENCE fence);
+typedef VK_RESULT (VKAPI *vkWsiX11AssociateConnectionType)(VK_PHYSICAL_GPU gpu, const VK_WSI_X11_CONNECTION_INFO* pConnectionInfo);
+typedef VK_RESULT (VKAPI *vkWsiX11GetMSCType)(VK_DEVICE device, xcb_window_t window, xcb_randr_crtc_t crtc, uint64_t* pMsc);
+typedef VK_RESULT (VKAPI *vkWsiX11CreatePresentableImageType)(VK_DEVICE device, const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem);
+typedef VK_RESULT (VKAPI *vkWsiX11QueuePresentType)(VK_QUEUE queue, const VK_WSI_X11_PRESENT_INFO* pPresentInfo, VK_FENCE fence);
/**
* Associate an X11 connection with a GPU. This should be done before device
* creation. If the device is already created,
- * XGL_ERROR_DEVICE_ALREADY_CREATED is returned.
+ * VK_ERROR_DEVICE_ALREADY_CREATED is returned.
*
* Truth is, given a connection, we could find the associated GPU. But
* without having a GPU as the first parameter, the loader could not find the
* dispatch table.
*
- * This function is available when xglGetExtensionSupport says "XGL_WSI_X11"
+ * This function is available when vkGetExtensionSupport says "VK_WSI_X11"
* is supported.
*/
-XGL_RESULT XGLAPI xglWsiX11AssociateConnection(
- XGL_PHYSICAL_GPU gpu,
- const XGL_WSI_X11_CONNECTION_INFO* pConnectionInfo);
+VK_RESULT VKAPI vkWsiX11AssociateConnection(
+ VK_PHYSICAL_GPU gpu,
+ const VK_WSI_X11_CONNECTION_INFO* pConnectionInfo);
/**
* Return the current MSC (Media Stream Counter, incremented for each vblank)
* of \p crtc. If crtc is \p XCB_NONE, a suitable CRTC is picked based on \p
* win.
*/
-XGL_RESULT XGLAPI xglWsiX11GetMSC(
- XGL_DEVICE device,
+VK_RESULT VKAPI vkWsiX11GetMSC(
+ VK_DEVICE device,
xcb_window_t window,
xcb_randr_crtc_t crtc,
uint64_t* pMsc);
/**
- * Create an XGL_IMAGE that can be presented. An XGL_GPU_MEMORY is created
+ * Create a VK_IMAGE that can be presented. A VK_GPU_MEMORY is created
* and bound automatically. The memory returned can only be used in
- * xglQueue[Add|Remove]MemReference. Destroying the memory or binding another memory to the
+ * vkQueue[Add|Remove]MemReference. Destroying the memory or binding another memory to the
* image is not allowed.
*/
-XGL_RESULT XGLAPI xglWsiX11CreatePresentableImage(
- XGL_DEVICE device,
- const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo,
- XGL_IMAGE* pImage,
- XGL_GPU_MEMORY* pMem);
+VK_RESULT VKAPI vkWsiX11CreatePresentableImage(
+ VK_DEVICE device,
+ const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo,
+ VK_IMAGE* pImage,
+ VK_GPU_MEMORY* pMem);
/**
* Present an image to an X11 window. The presentation always occurs after
* the command buffers in the queue have been completed, subject to other
- * parameters specified in XGL_WSI_X11_PRESENT_INFO.
+ * parameters specified in VK_WSI_X11_PRESENT_INFO.
*
* Fence is reached when the presentation occurs.
*/
-XGL_RESULT XGLAPI xglWsiX11QueuePresent(
- XGL_QUEUE queue,
- const XGL_WSI_X11_PRESENT_INFO* pPresentInfo,
- XGL_FENCE fence);
+VK_RESULT VKAPI vkWsiX11QueuePresent(
+ VK_QUEUE queue,
+ const VK_WSI_X11_PRESENT_INFO* pPresentInfo,
+ VK_FENCE fence);
#ifdef __cplusplus
} // extern "C"
#endif // __cplusplus
-#endif // __XGLWSIX11EXT_H__
+#endif // __VKWSIX11EXT_H__
cmake_minimum_required (VERSION 2.8.11)
-macro(run_xgl_helper subcmd)
+macro(run_vk_helper subcmd)
add_custom_command(OUTPUT ${ARGN}
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl_helper.py --${subcmd} ${PROJECT_SOURCE_DIR}/include/xgl.h --abs_out_dir ${CMAKE_CURRENT_BINARY_DIR}
- DEPENDS ${PROJECT_SOURCE_DIR}/xgl_helper.py ${PROJECT_SOURCE_DIR}/include/xgl.h
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/vk_helper.py --${subcmd} ${PROJECT_SOURCE_DIR}/include/vulkan.h --abs_out_dir ${CMAKE_CURRENT_BINARY_DIR}
+ DEPENDS ${PROJECT_SOURCE_DIR}/vk_helper.py ${PROJECT_SOURCE_DIR}/include/vulkan.h
)
endmacro()
-macro(run_xgl_layer_generate subcmd output)
+macro(run_vk_layer_generate subcmd output)
add_custom_command(OUTPUT ${output}
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-layer-generate.py ${subcmd} ${PROJECT_SOURCE_DIR}/include/xgl.h > ${output}
- DEPENDS ${PROJECT_SOURCE_DIR}/xgl-layer-generate.py ${PROJECT_SOURCE_DIR}/include/xgl.h ${PROJECT_SOURCE_DIR}/xgl.py
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-layer-generate.py ${subcmd} ${PROJECT_SOURCE_DIR}/include/vulkan.h > ${output}
+ DEPENDS ${PROJECT_SOURCE_DIR}/xgl-layer-generate.py ${PROJECT_SOURCE_DIR}/include/vulkan.h ${PROJECT_SOURCE_DIR}/xgl.py
)
endmacro()
if (WIN32)
- macro(add_xgl_layer target)
- add_custom_command(OUTPUT XGLLayer${target}.def
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py win-def-file XGLLayer${target} layer > XGLLayer${target}.def
+ macro(add_vk_layer target)
+ add_custom_command(OUTPUT VKLayer${target}.def
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py win-def-file VKLayer${target} layer > VKLayer${target}.def
DEPENDS ${PROJECT_SOURCE_DIR}/xgl-generate.py ${PROJECT_SOURCE_DIR}/xgl.py
)
- add_library(XGLLayer${target} SHARED ${ARGN} XGLLayer${target}.def)
- target_link_Libraries(XGLLayer${target} layer_utils)
- add_dependencies(XGLLayer${target} generate_xgl_layer_helpers)
- add_dependencies(XGLLayer${target} ${CMAKE_CURRENT_BINARY_DIR}/XGLLayer${target}.def)
- set_target_properties(XGLLayer${target} PROPERTIES LINK_FLAGS "/DEF:${CMAKE_CURRENT_BINARY_DIR}/XGLLayer${target}.def")
+ add_library(VKLayer${target} SHARED ${ARGN} VKLayer${target}.def)
+ target_link_libraries(VKLayer${target} layer_utils)
+ add_dependencies(VKLayer${target} generate_vk_layer_helpers)
+ add_dependencies(VKLayer${target} ${CMAKE_CURRENT_BINARY_DIR}/VKLayer${target}.def)
+ set_target_properties(VKLayer${target} PROPERTIES LINK_FLAGS "/DEF:${CMAKE_CURRENT_BINARY_DIR}/VKLayer${target}.def")
endmacro()
else()
- macro(add_xgl_layer target)
- add_library(XGLLayer${target} SHARED ${ARGN})
- target_link_Libraries(XGLLayer${target} layer_utils)
- add_dependencies(XGLLayer${target} generate_xgl_layer_helpers)
- set_target_properties(XGLLayer${target} PROPERTIES LINK_FLAGS "-Wl,-Bsymbolic")
+ macro(add_vk_layer target)
+ add_library(VKLayer${target} SHARED ${ARGN})
+ target_link_libraries(VKLayer${target} layer_utils)
+ add_dependencies(VKLayer${target} generate_vk_layer_helpers)
+ set_target_properties(VKLayer${target} PROPERTIES LINK_FLAGS "-Wl,-Bsymbolic")
endmacro()
endif()
)
if (WIN32)
- set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DXGL_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS")
- set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS")
+ set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DVK_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS")
+ set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS")
endif()
if (NOT WIN32)
set (CMAKE_CXX_FLAGS "-std=c++11")
- set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DXGL_PROTOTYPES -Wpointer-arith")
- set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES -Wpointer-arith")
+ set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DVK_PROTOTYPES -Wpointer-arith")
+ set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES -Wpointer-arith")
endif()
-add_custom_command(OUTPUT xgl_dispatch_table_helper.h
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py dispatch-table-ops layer > xgl_dispatch_table_helper.h
+add_custom_command(OUTPUT vk_dispatch_table_helper.h
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py dispatch-table-ops layer > vk_dispatch_table_helper.h
DEPENDS ${PROJECT_SOURCE_DIR}/xgl-generate.py ${PROJECT_SOURCE_DIR}/xgl.py)
-add_custom_command(OUTPUT xgl_generic_intercept_proc_helper.h
- COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py layer-intercept-proc > xgl_generic_intercept_proc_helper.h
+add_custom_command(OUTPUT vk_generic_intercept_proc_helper.h
+ COMMAND ${PYTHON_CMD} ${PROJECT_SOURCE_DIR}/xgl-generate.py layer-intercept-proc > vk_generic_intercept_proc_helper.h
DEPENDS ${PROJECT_SOURCE_DIR}/xgl-generate.py ${PROJECT_SOURCE_DIR}/xgl.py)
-run_xgl_helper(gen_enum_string_helper xgl_enum_string_helper.h)
-run_xgl_helper(gen_struct_wrappers
- xgl_struct_string_helper.h
- xgl_struct_string_helper_cpp.h
- xgl_struct_string_helper_no_addr.h
- xgl_struct_string_helper_no_addr_cpp.h
- xgl_struct_size_helper.h
- xgl_struct_size_helper.c
- xgl_struct_wrappers.h
- xgl_struct_wrappers.cpp
+run_vk_helper(gen_enum_string_helper vk_enum_string_helper.h)
+run_vk_helper(gen_struct_wrappers
+ vk_struct_string_helper.h
+ vk_struct_string_helper_cpp.h
+ vk_struct_string_helper_no_addr.h
+ vk_struct_string_helper_no_addr_cpp.h
+ vk_struct_size_helper.h
+ vk_struct_size_helper.c
+ vk_struct_wrappers.h
+ vk_struct_wrappers.cpp
)
-run_xgl_helper(gen_graphviz xgl_struct_graphviz_helper.h)
+run_vk_helper(gen_graphviz vk_struct_graphviz_helper.h)
-add_custom_target(generate_xgl_layer_helpers DEPENDS
- xgl_dispatch_table_helper.h
- xgl_generic_intercept_proc_helper.h
- xgl_enum_string_helper.h
- xgl_struct_string_helper.h
- xgl_struct_string_helper_no_addr.h
- xgl_struct_string_helper_cpp.h
- xgl_struct_string_helper_no_addr_cpp.h
- xgl_struct_size_helper.h
- xgl_struct_size_helper.c
- xgl_struct_wrappers.h
- xgl_struct_wrappers.cpp
- xgl_struct_graphviz_helper.h
+add_custom_target(generate_vk_layer_helpers DEPENDS
+ vk_dispatch_table_helper.h
+ vk_generic_intercept_proc_helper.h
+ vk_enum_string_helper.h
+ vk_struct_string_helper.h
+ vk_struct_string_helper_no_addr.h
+ vk_struct_string_helper_cpp.h
+ vk_struct_string_helper_no_addr_cpp.h
+ vk_struct_size_helper.h
+ vk_struct_size_helper.c
+ vk_struct_wrappers.h
+ vk_struct_wrappers.cpp
+ vk_struct_graphviz_helper.h
)
-run_xgl_layer_generate(Generic generic_layer.c)
-run_xgl_layer_generate(ApiDump api_dump.c)
-run_xgl_layer_generate(ApiDumpFile api_dump_file.c)
-run_xgl_layer_generate(ApiDumpNoAddr api_dump_no_addr.c)
-run_xgl_layer_generate(ApiDumpCpp api_dump.cpp)
-run_xgl_layer_generate(ApiDumpNoAddrCpp api_dump_no_addr.cpp)
-run_xgl_layer_generate(ObjectTracker object_track.c)
+run_vk_layer_generate(Generic generic_layer.c)
+run_vk_layer_generate(ApiDump api_dump.c)
+run_vk_layer_generate(ApiDumpFile api_dump_file.c)
+run_vk_layer_generate(ApiDumpNoAddr api_dump_no_addr.c)
+run_vk_layer_generate(ApiDumpCpp api_dump.cpp)
+run_vk_layer_generate(ApiDumpNoAddrCpp api_dump_no_addr.cpp)
+run_vk_layer_generate(ObjectTracker object_track.c)
add_library(layer_utils SHARED layers_config.cpp)
if (WIN32)
target_link_libraries(layer_utils)
endif()
-add_xgl_layer(Basic basic.cpp)
-add_xgl_layer(Multi multi.cpp)
-add_xgl_layer(DrawState draw_state.cpp)
-add_xgl_layer(MemTracker mem_tracker.cpp)
-add_xgl_layer(GlaveSnapshot glave_snapshot.c)
+add_vk_layer(Basic basic.cpp)
+add_vk_layer(Multi multi.cpp)
+add_vk_layer(DrawState draw_state.cpp)
+add_vk_layer(MemTracker mem_tracker.cpp)
+add_vk_layer(GlaveSnapshot glave_snapshot.c)
# generated
-add_xgl_layer(Generic generic_layer.c)
-add_xgl_layer(APIDump api_dump.c)
-add_xgl_layer(APIDumpFile api_dump_file.c)
-add_xgl_layer(APIDumpNoAddr api_dump_no_addr.c)
-add_xgl_layer(APIDumpCpp api_dump.cpp)
-add_xgl_layer(APIDumpNoAddrCpp api_dump_no_addr.cpp)
-add_xgl_layer(ObjectTracker object_track.c)
-add_xgl_layer(ParamChecker param_checker.cpp)
+add_vk_layer(Generic generic_layer.c)
+add_vk_layer(APIDump api_dump.c)
+add_vk_layer(APIDumpFile api_dump_file.c)
+add_vk_layer(APIDumpNoAddr api_dump_no_addr.c)
+add_vk_layer(APIDumpCpp api_dump.cpp)
+add_vk_layer(APIDumpNoAddrCpp api_dump_no_addr.cpp)
+add_vk_layer(ObjectTracker object_track.c)
+add_vk_layer(ParamChecker param_checker.cpp)
## Overview
-Layer libraries can be written to intercept or hook XGL entrypoints for various
-debug and validation purposes. One or more XGL entrypoints can be defined in your Layer
+Layer libraries can be written to intercept or hook VK entrypoints for various
+debug and validation purposes. One or more VK entrypoints can be defined in your Layer
library. Undefined entrypoints in the Layer library will be passed to the next Layer, which
may be the driver. Multiple layer libraries can be chained together (actually a hierarchy).
-xglEnumerateLayer can be called to list the available layer libraries. xglGetProcAddr is
+vkEnumerateLayers can be called to list the available layer libraries. vkGetProcAddr is
used internally by the Layers and ICD Loader to initialize dispatch tables. Layers are
-activated at xglCreateDevice time. xglCreateDevice createInfo struct is extended to allow
+activated at vkCreateDevice time. The vkCreateDevice createInfo struct is extended to allow
a list of layers to be activated. Layer libraries can alternatively be LD\_PRELOADed depending
upon how they are implemented.
Note that some layers are code-generated and will therefore exist in the directory (build_dir)/layers
--include/xglLayer.h - header file for layer code.
+-include/vkLayer.h - header file for layer code.
### Templates
layer/Basic.cpp (name=Basic) - a simple example wrapping a few entrypoints. Shows layer features:
- Multiple dispatch tables for supporting multiple GPUs.
- Example layer extension function shown.
-- Layer extension advertised by xglGetExtension().
-- xglEnumerateLayers() supports loader layer name queries and call interception
+- Layer extension advertised by vkGetExtension().
+- vkEnumerateLayers() supports loader layer name queries and call interception
- Can be LD\_PRELOADed individually
layer/Multi.cpp (name=multi1:multi2) - a simple example showing multiple layers per library
-(build dir)/layer/generic_layer.c (name=Generic) - auto generated example wrapping all XGL entrypoints. Single global dispatch table. Can be LD\_PRELOADed.
+(build dir)/layer/generic_layer.c (name=Generic) - auto-generated example wrapping all VK entrypoints. Single global dispatch table. Can be LD\_PRELOADed.
### Print API Calls and Parameter Values
(build dir)/layer/api_dump.c (name=APIDump) - print out API calls along with parameter values
(build dir)/layer/api_dump.cpp (name=APIDumpCpp) - same as above but uses c++ strings and i/o streams
-(build dir)/layer/api\_dump\_file.c (name=APIDumpFile) - Write API calls along with parameter values to xgl\_apidump.txt file.
+(build dir)/layer/api\_dump\_file.c (name=APIDumpFile) - Write API calls along with parameter values to vk\_apidump.txt file.
(build dir)/layer/api\_dump\_no\_addr.c (name=APIDumpNoAddr) - print out API calls along with parameter values but replace any variable addresses with the static string "addr".
(build dir)/layer/api\_dump\_no\_addr.cpp (name=APIDumpNoAddrCpp) - same as above but uses c++ strings and i/o streams
### Print Object Stats
-(build dir>/layer/object_track.c (name=ObjectTracker) - Print object CREATE/USE/DESTROY stats. Individually track objects by category. XGL\_OBJECT\_TYPE enum defined in object_track.h. If a Dbg callback function is registered, this layer will use callback function(s) for reporting, otherwise uses stdout. Provides custom interface to query number of live objects of given type "XGL\_UINT64 objTrackGetObjectCount(XGL\_OBJECT\_TYPE type)" and a secondary call to return an array of those objects "XGL\_RESULT objTrackGetObjects(XGL\_OBJECT\_TYPE type, XGL\_UINT64 objCount, OBJTRACK\_NODE* pObjNodeArray)".
+(build dir)/layer/object_track.c (name=ObjectTracker) - Print object CREATE/USE/DESTROY stats. Individually track objects by category. VK\_OBJECT\_TYPE enum defined in object_track.h. If a Dbg callback function is registered, this layer will use the callback function(s) for reporting; otherwise it uses stdout. Provides a custom interface to query the number of live objects of a given type "VK\_UINT64 objTrackGetObjectCount(VK\_OBJECT\_TYPE type)" and a secondary call to return an array of those objects "VK\_RESULT objTrackGetObjects(VK\_OBJECT\_TYPE type, VK\_UINT64 objCount, OBJTRACK\_NODE* pObjNodeArray)".
### Report Draw State
layer/draw\_state.c (name=DrawState) - DrawState reports the Descriptor Set, Pipeline State, and dynamic state at each Draw call. The DrawState layer performs a number of validation checks on this state. Of primary interest is making sure that the resources bound to Descriptor Sets correctly align with the layout specified for the Set. If a Dbg callback function is registered, this layer will use the callback function(s) for reporting; otherwise it uses stdout.
## Using Layers
-1. Build XGL loader and i965 icd driver using normal steps (cmake and make)
-2. Place libXGLLayer<name>.so in the same directory as your XGL test or app:
+1. Build VK loader and i965 icd driver using normal steps (cmake and make)
+2. Place libVKLayer<name>.so in the same directory as your VK test or app:
- cp build/layer/libXGLLayerBasic.so build/layer/libXGLLayerGeneric.so build/tests
+ cp build/layer/libVKLayerBasic.so build/layer/libVKLayerGeneric.so build/tests
- This is required for the Icd loader to be able to scan and enumerate your library. Alternatively, use the LIBXGL\_LAYERS\_PATH environment variable to specify where the layer libraries reside.
+ This is required for the ICD loader to be able to scan and enumerate your library. Alternatively, use the LIBVK\_LAYERS\_PATH environment variable to specify where the layer libraries reside.
3. Specify which Layers to activate by using
-xglCreateDevice XGL\_LAYER\_CREATE\_INFO struct or environment variable LIBXGL\_LAYER\_NAMES
+the vkCreateDevice VK\_LAYER\_CREATE\_INFO struct or the environment variable LIBVK\_LAYER\_NAMES
- export LIBXGL\_LAYER\_NAMES=Basic:Generic
- cd build/tests; ./xglinfo
+ export LIBVK\_LAYER\_NAMES=Basic:Generic
+ cd build/tests; ./vkinfo
## Tips for writing new layers
-1. Must implement xglGetProcAddr() (aka GPA);
-2. Must have a local dispatch table to call next layer (see xglLayer.h);
-3. Should implement xglEnumerateLayers() returning layer name when gpu == NULL; otherwise layer name is extracted from library filename by the Loader;
+1. Must implement vkGetProcAddr() (aka GPA);
+2. Must have a local dispatch table to call next layer (see vkLayer.h);
+3. Should implement vkEnumerateLayers() returning layer name when gpu == NULL; otherwise layer name is extracted from library filename by the Loader;
4. gpu objects must be unwrapped (gpu->nextObject) when passed to next layer;
5. the next layer's GPA can be found in the wrapped gpu object;
6. Loader calls a layer's GPA first so initialization should occur here;
7. all entrypoints can be wrapped but will only be called after the layer is activated
- via the first xglCreatDevice;
-8. entrypoint names can be any name as specified by the layers xglGetProcAddr
- implementation; exceptions are xglGetProcAddr and xglEnumerateLayers,
+ via the first vkCreateDevice;
+8. entrypoint names can be any name as specified by the layer's vkGetProcAddr
+ implementation; exceptions are vkGetProcAddr and vkEnumerateLayers,
which must have the correct name since the Loader calls these entrypoints;
-9. entrypoint names must be exported to the dynamic loader with XGL\_LAYER\_EXPORT;
-10. For LD\_PRELOAD support: a)entrypoint names should be offical xgl names and
+9. entrypoint names must be exported to the dynamic loader with VK\_LAYER\_EXPORT;
+10. For LD\_PRELOAD support: a) entrypoint names should be official vk names and
b) initialization should occur on any call with a gpu object (Loader type
- initialization must be done if implementing xglInitAndEnumerateGpus).
-11. Implement xglGetExtension() if you want to advertise a layer extension
+ initialization must be done if implementing vkInitAndEnumerateGpus).
+11. Implement vkGetExtension() if you want to advertise a layer extension
(only available after the layer is activated);
-12. Layer naming convention is camel case same name as in library: libXGLLayer<name>.so
+12. Layer naming convention is camel case, matching the name in the library filename: libVKLayer<name>.so
13. For multiple layers in one library, implement a separate GetProcAddr for each
    layer and export them to the dynamic loader; the function name is <layerName>GetProcAddr().
- Main xglGetProcAddr() should also be implemented.
+ Main vkGetProcAddr() should also be implemented.
## Status
### Current Features
-- scanning of available Layers during xglInitAndEnumerateGpus;
-- layer names retrieved via xglEnumerateLayers();
-- xglEnumerateLayers and xglGetProcAddr supported APIs in xgl.h, ICD loader and i965 driver;
+- scanning of available Layers during vkInitAndEnumerateGpus;
+- layer names retrieved via vkEnumerateLayers();
+- vkEnumerateLayers and vkGetProcAddr are supported APIs in vulkan.h, the ICD loader, and the i965 driver;
- multiple layers in a hierarchy supported;
- layer enumeration supported per GPU;
- layers activated per gpu and per icd driver: separate dispatch table and layer library list in loader for each gpu or icd driver;
-- activation via xglCreateDevice extension struct in CreateInfo or via env var (LIBXGL\_LAYER\_NAMES);
+- activation via vkCreateDevice extension struct in CreateInfo or via env var (LIBVK\_LAYER\_NAMES);
- layer libraries can be LD\_PRELOADed if implemented correctly;
### Current known issues
- Layers with multiple threads are not well tested and some layers are likely to have issues. The APIDump family of layers should be thread-safe.
- layer libraries (except Basic) don't support multiple dispatch tables for multiple GPUs;
-- layer libraries not yet include loader init functionality for full LD\_PRELOAD of entire API including xglInitAndEnumerateGpus;
-- Since Layers aren't activated until xglCreateDevice, any calls to xglGetExtension() will not report layer extensions unless implemented in the layer;
-- layer extensions do NOT need to be enabled in xglCreateDevice to be available;
+- layer libraries do not yet include loader init functionality for full LD\_PRELOAD of the entire API, including vkInitAndEnumerateGpus;
+- Since Layers aren't activated until vkCreateDevice, any calls to vkGetExtension() will not report layer extensions unless implemented in the layer;
+- layer extensions do NOT need to be enabled in vkCreateDevice to be available;
- no support for apps registering layers; layers must be discovered via the initial scan
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <assert.h>
#include <unordered_map>
#include "loader_platform.h"
-#include "xgl_dispatch_table_helper.h"
-#include "xglLayer.h"
+#include "vk_dispatch_table_helper.h"
+#include "vkLayer.h"
// The following is #included again to catch certain OS-specific functions
// being used:
#include "loader_platform.h"
-static std::unordered_map<void *, XGL_LAYER_DISPATCH_TABLE *> tableMap;
+static std::unordered_map<void *, VK_LAYER_DISPATCH_TABLE *> tableMap;
-static XGL_LAYER_DISPATCH_TABLE * initLayerTable(const XGL_BASE_LAYER_OBJECT *gpuw)
+static VK_LAYER_DISPATCH_TABLE * initLayerTable(const VK_BASE_LAYER_OBJECT *gpuw)
{
- XGL_LAYER_DISPATCH_TABLE *pTable;
+ VK_LAYER_DISPATCH_TABLE *pTable;
assert(gpuw);
- std::unordered_map<void *, XGL_LAYER_DISPATCH_TABLE *>::const_iterator it = tableMap.find((void *) gpuw);
+ std::unordered_map<void *, VK_LAYER_DISPATCH_TABLE *>::const_iterator it = tableMap.find((void *) gpuw);
if (it == tableMap.end())
{
- pTable = new XGL_LAYER_DISPATCH_TABLE;
+ pTable = new VK_LAYER_DISPATCH_TABLE;
tableMap[(void *) gpuw] = pTable;
} else
{
return it->second;
}
- layer_initialize_dispatch_table(pTable, gpuw->pGPA, (XGL_PHYSICAL_GPU) gpuw->nextObject);
+ layer_initialize_dispatch_table(pTable, gpuw->pGPA, (VK_PHYSICAL_GPU) gpuw->nextObject);
return pTable;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglLayerExtension1(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkLayerExtension1(VK_DEVICE device)
{
- printf("In xglLayerExtension1() call w/ device: %p\n", (void*)device);
- printf("xglLayerExtension1 returning SUCCESS\n");
- return XGL_SUCCESS;
+ printf("In vkLayerExtension1() call w/ device: %p\n", (void*)device);
+ printf("vkLayerExtension1 returning SUCCESS\n");
+ return VK_SUCCESS;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)
{
- XGL_RESULT result;
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_RESULT result;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
    /* This entrypoint is NOT going to init its own dispatch table since the loader calls here early */
- if (!strncmp(pExtName, "xglLayerExtension1", strlen("xglLayerExtension1")))
+ if (!strncmp(pExtName, "vkLayerExtension1", strlen("vkLayerExtension1")))
{
- result = XGL_SUCCESS;
+ result = VK_SUCCESS;
} else if (!strncmp(pExtName, "Basic", strlen("Basic")))
{
- result = XGL_SUCCESS;
+ result = VK_SUCCESS;
} else if (!tableMap.empty() && (tableMap.find(gpuw) != tableMap.end()))
{
- printf("At start of wrapped xglGetExtensionSupport() call w/ gpu: %p\n", (void*)gpu);
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap[gpuw];
- result = pTable->GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
- printf("Completed wrapped xglGetExtensionSupport() call w/ gpu: %p\n", (void*)gpu);
+ printf("At start of wrapped vkGetExtensionSupport() call w/ gpu: %p\n", (void*)gpu);
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap[gpuw];
+ result = pTable->GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ printf("Completed wrapped vkGetExtensionSupport() call w/ gpu: %p\n", (void*)gpu);
} else
{
- result = XGL_ERROR_INVALID_EXTENSION;
+ result = VK_ERROR_INVALID_EXTENSION;
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap[gpuw];
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap[gpuw];
- printf("At start of wrapped xglCreateDevice() call w/ gpu: %p\n", (void*)gpu);
- XGL_RESULT result = pTable->CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ printf("At start of wrapped vkCreateDevice() call w/ gpu: %p\n", (void*)gpu);
+ VK_RESULT result = pTable->CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
// create a mapping for the device object into the dispatch table
tableMap.emplace(*pDevice, pTable);
- printf("Completed wrapped xglCreateDevice() call w/ pDevice, Device %p: %p\n", (void*)pDevice, (void *) *pDevice);
+ printf("Completed wrapped vkCreateDevice() call w/ pDevice, Device %p: %p\n", (void*)pDevice, (void *) *pDevice);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetFormatInfo(XGL_DEVICE device, XGL_FORMAT format, XGL_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetFormatInfo(VK_DEVICE device, VK_FORMAT format, VK_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap[device];
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap[device];
- printf("At start of wrapped xglGetFormatInfo() call w/ device: %p\n", (void*)device);
- XGL_RESULT result = pTable->GetFormatInfo(device, format, infoType, pDataSize, pData);
- printf("Completed wrapped xglGetFormatInfo() call w/ device: %p\n", (void*)device);
+ printf("At start of wrapped vkGetFormatInfo() call w/ device: %p\n", (void*)device);
+ VK_RESULT result = pTable->GetFormatInfo(device, format, infoType, pDataSize, pData);
+ printf("Completed wrapped vkGetFormatInfo() call w/ device: %p\n", (void*)device);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
{
if (gpu != NULL)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE* pTable = initLayerTable(gpuw);
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE* pTable = initLayerTable(gpuw);
- printf("At start of wrapped xglEnumerateLayers() call w/ gpu: %p\n", gpu);
- XGL_RESULT result = pTable->EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ printf("At start of wrapped vkEnumerateLayers() call w/ gpu: %p\n", (void*)gpu);
+ VK_RESULT result = pTable->EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
return result;
} else
{
if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL || pReserved == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
// Example of a layer that is only compatible with Intel's GPUs
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT*) pReserved;
- xglGetGpuInfoType fpGetGpuInfo;
- XGL_PHYSICAL_GPU_PROPERTIES gpuProps;
- size_t dataSize = sizeof(XGL_PHYSICAL_GPU_PROPERTIES);
- fpGetGpuInfo = (xglGetGpuInfoType) gpuw->pGPA((XGL_PHYSICAL_GPU) gpuw->nextObject, "xglGetGpuInfo");
- fpGetGpuInfo((XGL_PHYSICAL_GPU) gpuw->nextObject, XGL_INFO_TYPE_PHYSICAL_GPU_PROPERTIES, &dataSize, &gpuProps);
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT*) pReserved;
+ vkGetGpuInfoType fpGetGpuInfo;
+ VK_PHYSICAL_GPU_PROPERTIES gpuProps;
+ size_t dataSize = sizeof(VK_PHYSICAL_GPU_PROPERTIES);
+ fpGetGpuInfo = (vkGetGpuInfoType) gpuw->pGPA((VK_PHYSICAL_GPU) gpuw->nextObject, "vkGetGpuInfo");
+ fpGetGpuInfo((VK_PHYSICAL_GPU) gpuw->nextObject, VK_INFO_TYPE_PHYSICAL_GPU_PROPERTIES, &dataSize, &gpuProps);
if (gpuProps.vendorId == 0x8086)
{
*pOutLayerCount = 1;
} else
{
*pOutLayerCount = 0;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
}
-XGL_LAYER_EXPORT void * XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* pName)
+VK_LAYER_EXPORT void * VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* pName)
{
if (gpu == NULL)
return NULL;
- initLayerTable((const XGL_BASE_LAYER_OBJECT *) gpu);
-
- if (!strncmp("xglGetProcAddr", pName, sizeof("xglGetProcAddr")))
- return (void *) xglGetProcAddr;
- else if (!strncmp("xglCreateDevice", pName, sizeof ("xglCreateDevice")))
- return (void *) xglCreateDevice;
- else if (!strncmp("xglGetExtensionSupport", pName, sizeof ("xglGetExtensionSupport")))
- return (void *) xglGetExtensionSupport;
- else if (!strncmp("xglEnumerateLayers", pName, sizeof ("xglEnumerateLayers")))
- return (void *) xglEnumerateLayers;
- else if (!strncmp("xglGetFormatInfo", pName, sizeof ("xglGetFormatInfo")))
- return (void *) xglGetFormatInfo;
- else if (!strncmp("xglLayerExtension1", pName, sizeof("xglLayerExtension1")))
- return (void *) xglLayerExtension1;
+ initLayerTable((const VK_BASE_LAYER_OBJECT *) gpu);
+
+ if (!strncmp("vkGetProcAddr", pName, sizeof("vkGetProcAddr")))
+ return (void *) vkGetProcAddr;
+ else if (!strncmp("vkCreateDevice", pName, sizeof ("vkCreateDevice")))
+ return (void *) vkCreateDevice;
+ else if (!strncmp("vkGetExtensionSupport", pName, sizeof ("vkGetExtensionSupport")))
+ return (void *) vkGetExtensionSupport;
+ else if (!strncmp("vkEnumerateLayers", pName, sizeof ("vkEnumerateLayers")))
+ return (void *) vkEnumerateLayers;
+ else if (!strncmp("vkGetFormatInfo", pName, sizeof ("vkGetFormatInfo")))
+ return (void *) vkGetFormatInfo;
+ else if (!strncmp("vkLayerExtension1", pName, sizeof("vkLayerExtension1")))
+ return (void *) vkLayerExtension1;
else {
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU) gpuw->nextObject, pName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU) gpuw->nextObject, pName);
}
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <unordered_map>
#include "loader_platform.h"
-#include "xgl_dispatch_table_helper.h"
-#include "xgl_struct_string_helper_cpp.h"
+#include "vk_dispatch_table_helper.h"
+#include "vk_struct_string_helper_cpp.h"
#pragma GCC diagnostic ignored "-Wwrite-strings"
-#include "xgl_struct_graphviz_helper.h"
+#include "vk_struct_graphviz_helper.h"
#pragma GCC diagnostic warning "-Wwrite-strings"
-#include "xgl_struct_size_helper.h"
+#include "vk_struct_size_helper.h"
#include "draw_state.h"
#include "layers_config.h"
// The following is #included again to catch certain OS-specific functions
#include "loader_platform.h"
#include "layers_msg.h"
-unordered_map<XGL_SAMPLER, SAMPLER_NODE*> sampleMap;
-unordered_map<XGL_IMAGE_VIEW, IMAGE_NODE*> imageMap;
-unordered_map<XGL_BUFFER_VIEW, BUFFER_NODE*> bufferMap;
-unordered_map<XGL_DYNAMIC_STATE_OBJECT, DYNAMIC_STATE_NODE*> dynamicStateMap;
-unordered_map<XGL_PIPELINE, PIPELINE_NODE*> pipelineMap;
-unordered_map<XGL_DESCRIPTOR_POOL, POOL_NODE*> poolMap;
-unordered_map<XGL_DESCRIPTOR_SET, SET_NODE*> setMap;
-unordered_map<XGL_DESCRIPTOR_SET_LAYOUT, LAYOUT_NODE*> layoutMap;
+unordered_map<VK_SAMPLER, SAMPLER_NODE*> sampleMap;
+unordered_map<VK_IMAGE_VIEW, IMAGE_NODE*> imageMap;
+unordered_map<VK_BUFFER_VIEW, BUFFER_NODE*> bufferMap;
+unordered_map<VK_DYNAMIC_STATE_OBJECT, DYNAMIC_STATE_NODE*> dynamicStateMap;
+unordered_map<VK_PIPELINE, PIPELINE_NODE*> pipelineMap;
+unordered_map<VK_DESCRIPTOR_POOL, POOL_NODE*> poolMap;
+unordered_map<VK_DESCRIPTOR_SET, SET_NODE*> setMap;
+unordered_map<VK_DESCRIPTOR_SET_LAYOUT, LAYOUT_NODE*> layoutMap;
// Map for layout chains
-unordered_map<XGL_CMD_BUFFER, GLOBAL_CB_NODE*> cmdBufferMap;
-unordered_map<XGL_RENDER_PASS, XGL_RENDER_PASS_CREATE_INFO*> renderPassMap;
-unordered_map<XGL_FRAMEBUFFER, XGL_FRAMEBUFFER_CREATE_INFO*> frameBufferMap;
+unordered_map<VK_CMD_BUFFER, GLOBAL_CB_NODE*> cmdBufferMap;
+unordered_map<VK_RENDER_PASS, VK_RENDER_PASS_CREATE_INFO*> renderPassMap;
+unordered_map<VK_FRAMEBUFFER, VK_FRAMEBUFFER_CREATE_INFO*> frameBufferMap;
-static XGL_LAYER_DISPATCH_TABLE nextTable;
-static XGL_BASE_LAYER_OBJECT *pCurObj;
+static VK_LAYER_DISPATCH_TABLE nextTable;
+static VK_BASE_LAYER_OBJECT *pCurObj;
static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(g_initOnce);
// TODO : This can be much smarter, using separate locks for separate global data
static int globalLockInitialized = 0;
}
// Block of code at start here for managing/tracking Pipeline state that this layer cares about
// Just track 2 shaders for now
-#define XGL_NUM_GRAPHICS_SHADERS XGL_SHADER_STAGE_COMPUTE
+#define VK_NUM_GRAPHICS_SHADERS VK_SHADER_STAGE_COMPUTE
#define MAX_SLOTS 2048
#define NUM_COMMAND_BUFFERS_TO_DISPLAY 10
// Then need to synchronize the accesses based on cmd buffer so that if I'm reading state on one cmd buffer, updates
// to that same cmd buffer by separate thread are not changing state from underneath us
// Track the last cmd buffer touched by this thread
-static XGL_CMD_BUFFER g_lastCmdBuffer[MAX_TID] = {NULL};
+static VK_CMD_BUFFER g_lastCmdBuffer[MAX_TID] = {NULL};
// Track the last group of CBs touched for displaying to dot file
static GLOBAL_CB_NODE* g_pLastTouchedCB[NUM_COMMAND_BUFFERS_TO_DISPLAY] = {NULL};
static uint32_t g_lastTouchedCBIndex = 0;
// Track the last global DrawState of interest touched by any thread
static GLOBAL_CB_NODE* g_lastGlobalCB = NULL;
static PIPELINE_NODE* g_lastBoundPipeline = NULL;
-static DYNAMIC_STATE_NODE* g_lastBoundDynamicState[XGL_NUM_STATE_BIND_POINT] = {NULL};
-static XGL_DESCRIPTOR_SET g_lastBoundDescriptorSet = NULL;
+static DYNAMIC_STATE_NODE* g_lastBoundDynamicState[VK_NUM_STATE_BIND_POINT] = {NULL};
+static VK_DESCRIPTOR_SET g_lastBoundDescriptorSet = NULL;
#define MAX_BINDING 0xFFFFFFFF // Default vtxBinding value in CB Node to identify if no vtxBinding set
-//static DYNAMIC_STATE_NODE* g_pDynamicStateHead[XGL_NUM_STATE_BIND_POINT] = {0};
+//static DYNAMIC_STATE_NODE* g_pDynamicStateHead[VK_NUM_STATE_BIND_POINT] = {0};
-static void insertDynamicState(const XGL_DYNAMIC_STATE_OBJECT state, const GENERIC_HEADER* pCreateInfo, XGL_STATE_BIND_POINT bindPoint)
+static void insertDynamicState(const VK_DYNAMIC_STATE_OBJECT state, const GENERIC_HEADER* pCreateInfo, VK_STATE_BIND_POINT bindPoint)
{
- XGL_DYNAMIC_VP_STATE_CREATE_INFO* pVPCI = NULL;
+ VK_DYNAMIC_VP_STATE_CREATE_INFO* pVPCI = NULL;
size_t scSize = 0;
size_t vpSize = 0;
loader_platform_thread_lock_mutex(&globalLock);
DYNAMIC_STATE_NODE* pStateNode = new DYNAMIC_STATE_NODE;
pStateNode->stateObj = state;
switch (pCreateInfo->sType) {
- case XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO:
- memcpy(&pStateNode->create_info, pCreateInfo, sizeof(XGL_DYNAMIC_VP_STATE_CREATE_INFO));
- pVPCI = (XGL_DYNAMIC_VP_STATE_CREATE_INFO*)pCreateInfo;
- pStateNode->create_info.vpci.pScissors = new XGL_RECT[pStateNode->create_info.vpci.viewportAndScissorCount];
- pStateNode->create_info.vpci.pViewports = new XGL_VIEWPORT[pStateNode->create_info.vpci.viewportAndScissorCount];
- scSize = pVPCI->viewportAndScissorCount * sizeof(XGL_RECT);
- vpSize = pVPCI->viewportAndScissorCount * sizeof(XGL_VIEWPORT);
+ case VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO:
+ memcpy(&pStateNode->create_info, pCreateInfo, sizeof(VK_DYNAMIC_VP_STATE_CREATE_INFO));
+ pVPCI = (VK_DYNAMIC_VP_STATE_CREATE_INFO*)pCreateInfo;
+ pStateNode->create_info.vpci.pScissors = new VK_RECT[pStateNode->create_info.vpci.viewportAndScissorCount];
+ pStateNode->create_info.vpci.pViewports = new VK_VIEWPORT[pStateNode->create_info.vpci.viewportAndScissorCount];
+ scSize = pVPCI->viewportAndScissorCount * sizeof(VK_RECT);
+ vpSize = pVPCI->viewportAndScissorCount * sizeof(VK_VIEWPORT);
memcpy((void*)pStateNode->create_info.vpci.pScissors, pVPCI->pScissors, scSize);
memcpy((void*)pStateNode->create_info.vpci.pViewports, pVPCI->pViewports, vpSize);
break;
- case XGL_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO:
- memcpy(&pStateNode->create_info, pCreateInfo, sizeof(XGL_DYNAMIC_RS_STATE_CREATE_INFO));
+ case VK_STRUCTURE_TYPE_DYNAMIC_RS_STATE_CREATE_INFO:
+ memcpy(&pStateNode->create_info, pCreateInfo, sizeof(VK_DYNAMIC_RS_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO:
- memcpy(&pStateNode->create_info, pCreateInfo, sizeof(XGL_DYNAMIC_CB_STATE_CREATE_INFO));
+ case VK_STRUCTURE_TYPE_DYNAMIC_CB_STATE_CREATE_INFO:
+ memcpy(&pStateNode->create_info, pCreateInfo, sizeof(VK_DYNAMIC_CB_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO:
- memcpy(&pStateNode->create_info, pCreateInfo, sizeof(XGL_DYNAMIC_DS_STATE_CREATE_INFO));
+ case VK_STRUCTURE_TYPE_DYNAMIC_DS_STATE_CREATE_INFO:
+ memcpy(&pStateNode->create_info, pCreateInfo, sizeof(VK_DYNAMIC_DS_STATE_CREATE_INFO));
break;
default:
assert(0);
// Free all allocated nodes for Dynamic State objs
static void freeDynamicState()
{
- for (unordered_map<XGL_DYNAMIC_STATE_OBJECT, DYNAMIC_STATE_NODE*>::iterator ii=dynamicStateMap.begin(); ii!=dynamicStateMap.end(); ++ii) {
- if (XGL_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO == (*ii).second->create_info.vpci.sType) {
+ for (unordered_map<VK_DYNAMIC_STATE_OBJECT, DYNAMIC_STATE_NODE*>::iterator ii=dynamicStateMap.begin(); ii!=dynamicStateMap.end(); ++ii) {
+ if (VK_STRUCTURE_TYPE_DYNAMIC_VP_STATE_CREATE_INFO == (*ii).second->create_info.vpci.sType) {
delete[] (*ii).second->create_info.vpci.pScissors;
delete[] (*ii).second->create_info.vpci.pViewports;
}
// Free all sampler nodes
static void freeSamplers()
{
- for (unordered_map<XGL_SAMPLER, SAMPLER_NODE*>::iterator ii=sampleMap.begin(); ii!=sampleMap.end(); ++ii) {
+ for (unordered_map<VK_SAMPLER, SAMPLER_NODE*>::iterator ii=sampleMap.begin(); ii!=sampleMap.end(); ++ii) {
delete (*ii).second;
}
}
-static XGL_IMAGE_VIEW_CREATE_INFO* getImageViewCreateInfo(XGL_IMAGE_VIEW view)
+static VK_IMAGE_VIEW_CREATE_INFO* getImageViewCreateInfo(VK_IMAGE_VIEW view)
{
loader_platform_thread_lock_mutex(&globalLock);
if (imageMap.find(view) == imageMap.end()) {
// Free all image nodes
static void freeImages()
{
- for (unordered_map<XGL_IMAGE_VIEW, IMAGE_NODE*>::iterator ii=imageMap.begin(); ii!=imageMap.end(); ++ii) {
+ for (unordered_map<VK_IMAGE_VIEW, IMAGE_NODE*>::iterator ii=imageMap.begin(); ii!=imageMap.end(); ++ii) {
delete (*ii).second;
}
}
-static XGL_BUFFER_VIEW_CREATE_INFO* getBufferViewCreateInfo(XGL_BUFFER_VIEW view)
+static VK_BUFFER_VIEW_CREATE_INFO* getBufferViewCreateInfo(VK_BUFFER_VIEW view)
{
loader_platform_thread_lock_mutex(&globalLock);
if (bufferMap.find(view) == bufferMap.end()) {
// Free all buffer nodes
static void freeBuffers()
{
- for (unordered_map<XGL_BUFFER_VIEW, BUFFER_NODE*>::iterator ii=bufferMap.begin(); ii!=bufferMap.end(); ++ii) {
+ for (unordered_map<VK_BUFFER_VIEW, BUFFER_NODE*>::iterator ii=bufferMap.begin(); ii!=bufferMap.end(); ++ii) {
delete (*ii).second;
}
}
-static GLOBAL_CB_NODE* getCBNode(XGL_CMD_BUFFER cb);
+static GLOBAL_CB_NODE* getCBNode(VK_CMD_BUFFER cb);
-static void updateCBTracking(XGL_CMD_BUFFER cb)
+static void updateCBTracking(VK_CMD_BUFFER cb)
{
g_lastCmdBuffer[getTIDIndex()] = cb;
GLOBAL_CB_NODE* pCB = getCBNode(cb);
}
// Print the last bound dynamic state
-static void printDynamicState(const XGL_CMD_BUFFER cb)
+static void printDynamicState(const VK_CMD_BUFFER cb)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB) {
loader_platform_thread_lock_mutex(&globalLock);
char str[4*1024];
- for (uint32_t i = 0; i < XGL_NUM_STATE_BIND_POINT; i++) {
+ for (uint32_t i = 0; i < VK_NUM_STATE_BIND_POINT; i++) {
if (pCB->lastBoundDynamicState[i]) {
- sprintf(str, "Reporting CreateInfo for currently bound %s object %p", string_XGL_STATE_BIND_POINT((XGL_STATE_BIND_POINT)i), pCB->lastBoundDynamicState[i]->stateObj);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pCB->lastBoundDynamicState[i]->stateObj, 0, DRAWSTATE_NONE, "DS", str);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pCB->lastBoundDynamicState[i]->stateObj, 0, DRAWSTATE_NONE, "DS", dynamic_display(pCB->lastBoundDynamicState[i]->pCreateInfo, " ").c_str());
+ sprintf(str, "Reporting CreateInfo for currently bound %s object %p", string_VK_STATE_BIND_POINT((VK_STATE_BIND_POINT)i), pCB->lastBoundDynamicState[i]->stateObj);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pCB->lastBoundDynamicState[i]->stateObj, 0, DRAWSTATE_NONE, "DS", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pCB->lastBoundDynamicState[i]->stateObj, 0, DRAWSTATE_NONE, "DS", dynamic_display(pCB->lastBoundDynamicState[i]->pCreateInfo, " ").c_str());
break;
}
else {
- sprintf(str, "No dynamic state of type %s bound", string_XGL_STATE_BIND_POINT((XGL_STATE_BIND_POINT)i));
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", str);
+ sprintf(str, "No dynamic state of type %s bound", string_VK_STATE_BIND_POINT((VK_STATE_BIND_POINT)i));
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", str);
}
}
loader_platform_thread_unlock_mutex(&globalLock);
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cb);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cb, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cb, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
}
// Retrieve pipeline node ptr for given pipeline object
-static PIPELINE_NODE* getPipeline(XGL_PIPELINE pipeline)
+static PIPELINE_NODE* getPipeline(VK_PIPELINE pipeline)
{
loader_platform_thread_lock_mutex(&globalLock);
if (pipelineMap.find(pipeline) == pipelineMap.end()) {
}
// For given sampler, return a ptr to its Create Info struct, or NULL if sampler not found
-static XGL_SAMPLER_CREATE_INFO* getSamplerCreateInfo(const XGL_SAMPLER sampler)
+static VK_SAMPLER_CREATE_INFO* getSamplerCreateInfo(const VK_SAMPLER sampler)
{
loader_platform_thread_lock_mutex(&globalLock);
if (sampleMap.find(sampler) == sampleMap.end()) {
// Init the pipeline mapping info based on pipeline create info LL tree
// Threading note : Calls to this function should be wrapped in a mutex
-static void initPipeline(PIPELINE_NODE* pPipeline, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo)
+static void initPipeline(PIPELINE_NODE* pPipeline, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo)
{
// First init create info, we'll shadow the structs as we go down the tree
// TODO : Validate that no create info is incorrectly replicated
- memcpy(&pPipeline->graphicsPipelineCI, pCreateInfo, sizeof(XGL_GRAPHICS_PIPELINE_CREATE_INFO));
+ memcpy(&pPipeline->graphicsPipelineCI, pCreateInfo, sizeof(VK_GRAPHICS_PIPELINE_CREATE_INFO));
GENERIC_HEADER* pTrav = (GENERIC_HEADER*)pCreateInfo->pNext;
GENERIC_HEADER* pPrev = (GENERIC_HEADER*)&pPipeline->graphicsPipelineCI; // Hold prev ptr to tie chain of structs together
size_t bufferSize = 0;
- XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO* pVICI = NULL;
- XGL_PIPELINE_CB_STATE_CREATE_INFO* pCBCI = NULL;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO* pTmpPSSCI = NULL;
+ VK_PIPELINE_VERTEX_INPUT_CREATE_INFO* pVICI = NULL;
+ VK_PIPELINE_CB_STATE_CREATE_INFO* pCBCI = NULL;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO* pTmpPSSCI = NULL;
while (pTrav) {
switch (pTrav->sType) {
- case XGL_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO:
- pTmpPSSCI = (XGL_PIPELINE_SHADER_STAGE_CREATE_INFO*)pTrav;
+ case VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO:
+ pTmpPSSCI = (VK_PIPELINE_SHADER_STAGE_CREATE_INFO*)pTrav;
switch (pTmpPSSCI->shader.stage) {
- case XGL_SHADER_STAGE_VERTEX:
+ case VK_SHADER_STAGE_VERTEX:
pPrev->pNext = &pPipeline->vsCI;
pPrev = (GENERIC_HEADER*)&pPipeline->vsCI;
- memcpy(&pPipeline->vsCI, pTmpPSSCI, sizeof(XGL_PIPELINE_SHADER_STAGE_CREATE_INFO));
+ memcpy(&pPipeline->vsCI, pTmpPSSCI, sizeof(VK_PIPELINE_SHADER_STAGE_CREATE_INFO));
break;
- case XGL_SHADER_STAGE_TESS_CONTROL:
+ case VK_SHADER_STAGE_TESS_CONTROL:
pPrev->pNext = &pPipeline->tcsCI;
pPrev = (GENERIC_HEADER*)&pPipeline->tcsCI;
- memcpy(&pPipeline->tcsCI, pTmpPSSCI, sizeof(XGL_PIPELINE_SHADER_STAGE_CREATE_INFO));
+ memcpy(&pPipeline->tcsCI, pTmpPSSCI, sizeof(VK_PIPELINE_SHADER_STAGE_CREATE_INFO));
break;
- case XGL_SHADER_STAGE_TESS_EVALUATION:
+ case VK_SHADER_STAGE_TESS_EVALUATION:
pPrev->pNext = &pPipeline->tesCI;
pPrev = (GENERIC_HEADER*)&pPipeline->tesCI;
- memcpy(&pPipeline->tesCI, pTmpPSSCI, sizeof(XGL_PIPELINE_SHADER_STAGE_CREATE_INFO));
+ memcpy(&pPipeline->tesCI, pTmpPSSCI, sizeof(VK_PIPELINE_SHADER_STAGE_CREATE_INFO));
break;
- case XGL_SHADER_STAGE_GEOMETRY:
+ case VK_SHADER_STAGE_GEOMETRY:
pPrev->pNext = &pPipeline->gsCI;
pPrev = (GENERIC_HEADER*)&pPipeline->gsCI;
- memcpy(&pPipeline->gsCI, pTmpPSSCI, sizeof(XGL_PIPELINE_SHADER_STAGE_CREATE_INFO));
+ memcpy(&pPipeline->gsCI, pTmpPSSCI, sizeof(VK_PIPELINE_SHADER_STAGE_CREATE_INFO));
break;
- case XGL_SHADER_STAGE_FRAGMENT:
+ case VK_SHADER_STAGE_FRAGMENT:
pPrev->pNext = &pPipeline->fsCI;
pPrev = (GENERIC_HEADER*)&pPipeline->fsCI;
- memcpy(&pPipeline->fsCI, pTmpPSSCI, sizeof(XGL_PIPELINE_SHADER_STAGE_CREATE_INFO));
+ memcpy(&pPipeline->fsCI, pTmpPSSCI, sizeof(VK_PIPELINE_SHADER_STAGE_CREATE_INFO));
break;
- case XGL_SHADER_STAGE_COMPUTE:
- // TODO : Flag error, CS is specified through XGL_COMPUTE_PIPELINE_CREATE_INFO
+ case VK_SHADER_STAGE_COMPUTE:
+ // TODO : Flag error, CS is specified through VK_COMPUTE_PIPELINE_CREATE_INFO
break;
default:
// TODO : Flag error
break;
}
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_CREATE_INFO:
pPrev->pNext = &pPipeline->vertexInputCI;
pPrev = (GENERIC_HEADER*)&pPipeline->vertexInputCI;
- memcpy((void*)&pPipeline->vertexInputCI, pTrav, sizeof(XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO));
+ memcpy((void*)&pPipeline->vertexInputCI, pTrav, sizeof(VK_PIPELINE_VERTEX_INPUT_CREATE_INFO));
// Copy embedded ptrs
- pVICI = (XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO*)pTrav;
+ pVICI = (VK_PIPELINE_VERTEX_INPUT_CREATE_INFO*)pTrav;
pPipeline->vtxBindingCount = pVICI->bindingCount;
if (pPipeline->vtxBindingCount) {
- pPipeline->pVertexBindingDescriptions = new XGL_VERTEX_INPUT_BINDING_DESCRIPTION[pPipeline->vtxBindingCount];
- bufferSize = pPipeline->vtxBindingCount * sizeof(XGL_VERTEX_INPUT_BINDING_DESCRIPTION);
- memcpy((void*)pPipeline->pVertexBindingDescriptions, ((XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO*)pTrav)->pVertexAttributeDescriptions, bufferSize);
+ pPipeline->pVertexBindingDescriptions = new VK_VERTEX_INPUT_BINDING_DESCRIPTION[pPipeline->vtxBindingCount];
+ bufferSize = pPipeline->vtxBindingCount * sizeof(VK_VERTEX_INPUT_BINDING_DESCRIPTION);
+ memcpy((void*)pPipeline->pVertexBindingDescriptions, ((VK_PIPELINE_VERTEX_INPUT_CREATE_INFO*)pTrav)->pVertexAttributeDescriptions, bufferSize);
}
pPipeline->vtxAttributeCount = pVICI->attributeCount;
if (pPipeline->vtxAttributeCount) {
- pPipeline->pVertexAttributeDescriptions = new XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION[pPipeline->vtxAttributeCount];
- bufferSize = pPipeline->vtxAttributeCount * sizeof(XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION);
- memcpy((void*)pPipeline->pVertexAttributeDescriptions, ((XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO*)pTrav)->pVertexAttributeDescriptions, bufferSize);
+ pPipeline->pVertexAttributeDescriptions = new VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION[pPipeline->vtxAttributeCount];
+ bufferSize = pPipeline->vtxAttributeCount * sizeof(VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION);
+ memcpy((void*)pPipeline->pVertexAttributeDescriptions, ((VK_PIPELINE_VERTEX_INPUT_CREATE_INFO*)pTrav)->pVertexAttributeDescriptions, bufferSize);
}
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_IA_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->iaStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->iaStateCI;
- memcpy((void*)&pPipeline->iaStateCI, pTrav, sizeof(XGL_PIPELINE_IA_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->iaStateCI, pTrav, sizeof(VK_PIPELINE_IA_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_TESS_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->tessStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->tessStateCI;
- memcpy((void*)&pPipeline->tessStateCI, pTrav, sizeof(XGL_PIPELINE_TESS_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->tessStateCI, pTrav, sizeof(VK_PIPELINE_TESS_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_VP_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->vpStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->vpStateCI;
- memcpy((void*)&pPipeline->vpStateCI, pTrav, sizeof(XGL_PIPELINE_VP_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->vpStateCI, pTrav, sizeof(VK_PIPELINE_VP_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_RS_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->rsStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->rsStateCI;
- memcpy((void*)&pPipeline->rsStateCI, pTrav, sizeof(XGL_PIPELINE_RS_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->rsStateCI, pTrav, sizeof(VK_PIPELINE_RS_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->msStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->msStateCI;
- memcpy((void*)&pPipeline->msStateCI, pTrav, sizeof(XGL_PIPELINE_MS_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->msStateCI, pTrav, sizeof(VK_PIPELINE_MS_STATE_CREATE_INFO));
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_CB_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->cbStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->cbStateCI;
- memcpy((void*)&pPipeline->cbStateCI, pTrav, sizeof(XGL_PIPELINE_CB_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->cbStateCI, pTrav, sizeof(VK_PIPELINE_CB_STATE_CREATE_INFO));
// Copy embedded ptrs
- pCBCI = (XGL_PIPELINE_CB_STATE_CREATE_INFO*)pTrav;
+ pCBCI = (VK_PIPELINE_CB_STATE_CREATE_INFO*)pTrav;
pPipeline->attachmentCount = pCBCI->attachmentCount;
if (pPipeline->attachmentCount) {
- pPipeline->pAttachments = new XGL_PIPELINE_CB_ATTACHMENT_STATE[pPipeline->attachmentCount];
- bufferSize = pPipeline->attachmentCount * sizeof(XGL_PIPELINE_CB_ATTACHMENT_STATE);
- memcpy((void*)pPipeline->pAttachments, ((XGL_PIPELINE_CB_STATE_CREATE_INFO*)pTrav)->pAttachments, bufferSize);
+ pPipeline->pAttachments = new VK_PIPELINE_CB_ATTACHMENT_STATE[pPipeline->attachmentCount];
+ bufferSize = pPipeline->attachmentCount * sizeof(VK_PIPELINE_CB_ATTACHMENT_STATE);
+ memcpy((void*)pPipeline->pAttachments, ((VK_PIPELINE_CB_STATE_CREATE_INFO*)pTrav)->pAttachments, bufferSize);
}
break;
- case XGL_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO:
+ case VK_STRUCTURE_TYPE_PIPELINE_DS_STATE_CREATE_INFO:
pPrev->pNext = &pPipeline->dsStateCI;
pPrev = (GENERIC_HEADER*)&pPipeline->dsStateCI;
- memcpy((void*)&pPipeline->dsStateCI, pTrav, sizeof(XGL_PIPELINE_DS_STATE_CREATE_INFO));
+ memcpy((void*)&pPipeline->dsStateCI, pTrav, sizeof(VK_PIPELINE_DS_STATE_CREATE_INFO));
break;
default:
assert(0);
// Free the Pipeline nodes
static void freePipelines()
{
- for (unordered_map<XGL_PIPELINE, PIPELINE_NODE*>::iterator ii=pipelineMap.begin(); ii!=pipelineMap.end(); ++ii) {
+ for (unordered_map<VK_PIPELINE, PIPELINE_NODE*>::iterator ii=pipelineMap.begin(); ii!=pipelineMap.end(); ++ii) {
if ((*ii).second->pVertexBindingDescriptions) {
delete[] (*ii).second->pVertexBindingDescriptions;
}
}
}
// For given pipeline, return number of MSAA samples, or one if MSAA disabled
-static uint32_t getNumSamples(const XGL_PIPELINE pipeline)
+static uint32_t getNumSamples(const VK_PIPELINE pipeline)
{
PIPELINE_NODE* pPipe = pipelineMap[pipeline];
- if (XGL_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO == pPipe->msStateCI.sType) {
+ if (VK_STRUCTURE_TYPE_PIPELINE_MS_STATE_CREATE_INFO == pPipe->msStateCI.sType) {
if (pPipe->msStateCI.multisampleEnable)
return pPipe->msStateCI.samples;
}
return 1;
}
// Validate state related to the PSO
-static void validatePipelineState(const GLOBAL_CB_NODE* pCB, const XGL_PIPELINE_BIND_POINT pipelineBindPoint, const XGL_PIPELINE pipeline)
+static void validatePipelineState(const GLOBAL_CB_NODE* pCB, const VK_PIPELINE_BIND_POINT pipelineBindPoint, const VK_PIPELINE pipeline)
{
- if (XGL_PIPELINE_BIND_POINT_GRAPHICS == pipelineBindPoint) {
+ if (VK_PIPELINE_BIND_POINT_GRAPHICS == pipelineBindPoint) {
// Verify that any MSAA request in PSO matches sample# in bound FB
uint32_t psoNumSamples = getNumSamples(pipeline);
if (pCB->activeRenderPass) {
- XGL_RENDER_PASS_CREATE_INFO* pRPCI = renderPassMap[pCB->activeRenderPass];
- XGL_FRAMEBUFFER_CREATE_INFO* pFBCI = frameBufferMap[pCB->framebuffer];
+ VK_RENDER_PASS_CREATE_INFO* pRPCI = renderPassMap[pCB->activeRenderPass];
+ VK_FRAMEBUFFER_CREATE_INFO* pFBCI = frameBufferMap[pCB->framebuffer];
if (psoNumSamples != pFBCI->sampleCount) {
char str[1024];
sprintf(str, "Num samples mismatch! Binding PSO (%p) with %u samples while current RenderPass (%p) uses FB (%p) with %u samples!", (void*)pipeline, psoNumSamples, (void*)pCB->activeRenderPass, (void*)pCB->framebuffer, pFBCI->sampleCount);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pipeline, 0, DRAWSTATE_NUM_SAMPLES_MISMATCH, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pipeline, 0, DRAWSTATE_NUM_SAMPLES_MISMATCH, "DS", str);
}
} else {
// TODO : I believe it's an error if we reach this point and don't have an activeRenderPass
// Block of code at start here specifically for managing/tracking DSs
// Return Pool node ptr for specified pool or else NULL
-static POOL_NODE* getPoolNode(XGL_DESCRIPTOR_POOL pool)
+static POOL_NODE* getPoolNode(VK_DESCRIPTOR_POOL pool)
{
loader_platform_thread_lock_mutex(&globalLock);
if (poolMap.find(pool) == poolMap.end()) {
return poolMap[pool];
}
// Return Set node ptr for specified set or else NULL
-static SET_NODE* getSetNode(XGL_DESCRIPTOR_SET set)
+static SET_NODE* getSetNode(VK_DESCRIPTOR_SET set)
{
loader_platform_thread_lock_mutex(&globalLock);
if (setMap.find(set) == setMap.end()) {
return setMap[set];
}
-// Return XGL_TRUE if DS Exists and is within an xglBeginDescriptorPoolUpdate() call sequence, otherwise XGL_FALSE
-static bool32_t dsUpdateActive(XGL_DESCRIPTOR_SET ds)
+// Return VK_TRUE if DS exists and is within a vkBeginDescriptorPoolUpdate() call sequence, otherwise VK_FALSE
+static bool32_t dsUpdateActive(VK_DESCRIPTOR_SET ds)
{
// Note, both "get" functions use global mutex so this guy does not
SET_NODE* pTrav = getSetNode(ds);
return pPool->updateActive;
}
}
- return XGL_FALSE;
+ return VK_FALSE;
}
-static LAYOUT_NODE* getLayoutNode(const XGL_DESCRIPTOR_SET_LAYOUT layout) {
+static LAYOUT_NODE* getLayoutNode(const VK_DESCRIPTOR_SET_LAYOUT layout) {
loader_platform_thread_lock_mutex(&globalLock);
if (layoutMap.find(layout) == layoutMap.end()) {
loader_platform_thread_unlock_mutex(&globalLock);
{
switch (pUpdateStruct->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- return ((XGL_UPDATE_SAMPLERS*)pUpdateStruct)->binding;
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- return ((XGL_UPDATE_SAMPLER_TEXTURES*)pUpdateStruct)->binding;
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- return ((XGL_UPDATE_IMAGES*)pUpdateStruct)->binding;
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- return ((XGL_UPDATE_BUFFERS*)pUpdateStruct)->binding;
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
- return ((XGL_UPDATE_AS_COPY*)pUpdateStruct)->binding;
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ return ((VK_UPDATE_SAMPLERS*)pUpdateStruct)->binding;
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ return ((VK_UPDATE_SAMPLER_TEXTURES*)pUpdateStruct)->binding;
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ return ((VK_UPDATE_IMAGES*)pUpdateStruct)->binding;
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ return ((VK_UPDATE_BUFFERS*)pUpdateStruct)->binding;
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ return ((VK_UPDATE_AS_COPY*)pUpdateStruct)->binding;
default:
// TODO : Flag specific error for this case
assert(0);
{
switch (pUpdateStruct->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- return (((XGL_UPDATE_SAMPLERS*)pUpdateStruct)->arrayIndex);
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- return (((XGL_UPDATE_SAMPLER_TEXTURES*)pUpdateStruct)->arrayIndex);
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- return (((XGL_UPDATE_IMAGES*)pUpdateStruct)->arrayIndex);
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- return (((XGL_UPDATE_BUFFERS*)pUpdateStruct)->arrayIndex);
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ return (((VK_UPDATE_SAMPLERS*)pUpdateStruct)->arrayIndex);
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ return (((VK_UPDATE_SAMPLER_TEXTURES*)pUpdateStruct)->arrayIndex);
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ return (((VK_UPDATE_IMAGES*)pUpdateStruct)->arrayIndex);
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ return (((VK_UPDATE_BUFFERS*)pUpdateStruct)->arrayIndex);
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
// TODO : Need to understand this case better and make sure code is correct
- return (((XGL_UPDATE_AS_COPY*)pUpdateStruct)->arrayElement);
+ return (((VK_UPDATE_AS_COPY*)pUpdateStruct)->arrayElement);
default:
// TODO : Flag specific error for this case
assert(0);
{
switch (pUpdateStruct->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- return (((XGL_UPDATE_SAMPLERS*)pUpdateStruct)->count);
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- return (((XGL_UPDATE_SAMPLER_TEXTURES*)pUpdateStruct)->count);
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- return (((XGL_UPDATE_IMAGES*)pUpdateStruct)->count);
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- return (((XGL_UPDATE_BUFFERS*)pUpdateStruct)->count);
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ return (((VK_UPDATE_SAMPLERS*)pUpdateStruct)->count);
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ return (((VK_UPDATE_SAMPLER_TEXTURES*)pUpdateStruct)->count);
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ return (((VK_UPDATE_IMAGES*)pUpdateStruct)->count);
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ return (((VK_UPDATE_BUFFERS*)pUpdateStruct)->count);
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
// TODO : Need to understand this case better and make sure code is correct
- return (((XGL_UPDATE_AS_COPY*)pUpdateStruct)->count);
+ return (((VK_UPDATE_AS_COPY*)pUpdateStruct)->count);
default:
// TODO : Flag specific error for this case
assert(0);
static bool32_t validateUpdateType(const LAYOUT_NODE* pLayout, const GENERIC_HEADER* pUpdateStruct)
{
// First get actual type of update
- XGL_DESCRIPTOR_TYPE actualType;
+ VK_DESCRIPTOR_TYPE actualType;
uint32_t i = 0;
switch (pUpdateStruct->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- actualType = XGL_DESCRIPTOR_TYPE_SAMPLER;
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ actualType = VK_DESCRIPTOR_TYPE_SAMPLER;
break;
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- actualType = XGL_DESCRIPTOR_TYPE_SAMPLER_TEXTURE;
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ actualType = VK_DESCRIPTOR_TYPE_SAMPLER_TEXTURE;
break;
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- actualType = ((XGL_UPDATE_IMAGES*)pUpdateStruct)->descriptorType;
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ actualType = ((VK_UPDATE_IMAGES*)pUpdateStruct)->descriptorType;
break;
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- actualType = ((XGL_UPDATE_BUFFERS*)pUpdateStruct)->descriptorType;
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ actualType = ((VK_UPDATE_BUFFERS*)pUpdateStruct)->descriptorType;
break;
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
- actualType = ((XGL_UPDATE_AS_COPY*)pUpdateStruct)->descriptorType;
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ actualType = ((VK_UPDATE_AS_COPY*)pUpdateStruct)->descriptorType;
break;
default:
// TODO : Flag specific error for this case
size_t base_array_size = 0;
size_t total_array_size = 0;
size_t baseBuffAddr = 0;
- XGL_UPDATE_BUFFERS* pUBCI;
- XGL_UPDATE_IMAGES* pUICI;
- XGL_IMAGE_VIEW_ATTACH_INFO** ppLocalImageViews = NULL;
- XGL_BUFFER_VIEW_ATTACH_INFO** ppLocalBufferViews = NULL;
+ VK_UPDATE_BUFFERS* pUBCI;
+ VK_UPDATE_IMAGES* pUICI;
+ VK_IMAGE_VIEW_ATTACH_INFO** ppLocalImageViews = NULL;
+ VK_BUFFER_VIEW_ATTACH_INFO** ppLocalBufferViews = NULL;
char str[1024];
switch (pUpdate->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- pNewNode = (GENERIC_HEADER*)malloc(sizeof(XGL_UPDATE_SAMPLERS));
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ pNewNode = (GENERIC_HEADER*)malloc(sizeof(VK_UPDATE_SAMPLERS));
#if ALLOC_DEBUG
printf("Alloc10 #%lu pNewNode addr(%p)\n", ++g_alloc_count, (void*)pNewNode);
#endif
- memcpy(pNewNode, pUpdate, sizeof(XGL_UPDATE_SAMPLERS));
- array_size = sizeof(XGL_SAMPLER) * ((XGL_UPDATE_SAMPLERS*)pNewNode)->count;
- ((XGL_UPDATE_SAMPLERS*)pNewNode)->pSamplers = (XGL_SAMPLER*)malloc(array_size);
+ memcpy(pNewNode, pUpdate, sizeof(VK_UPDATE_SAMPLERS));
+ array_size = sizeof(VK_SAMPLER) * ((VK_UPDATE_SAMPLERS*)pNewNode)->count;
+ ((VK_UPDATE_SAMPLERS*)pNewNode)->pSamplers = (VK_SAMPLER*)malloc(array_size);
#if ALLOC_DEBUG
- printf("Alloc11 #%lu pNewNode->pSamplers addr(%p)\n", ++g_alloc_count, (void*)((XGL_UPDATE_SAMPLERS*)pNewNode)->pSamplers);
+ printf("Alloc11 #%lu pNewNode->pSamplers addr(%p)\n", ++g_alloc_count, (void*)((VK_UPDATE_SAMPLERS*)pNewNode)->pSamplers);
#endif
- memcpy((XGL_SAMPLER*)((XGL_UPDATE_SAMPLERS*)pNewNode)->pSamplers, ((XGL_UPDATE_SAMPLERS*)pUpdate)->pSamplers, array_size);
+ memcpy((VK_SAMPLER*)((VK_UPDATE_SAMPLERS*)pNewNode)->pSamplers, ((VK_UPDATE_SAMPLERS*)pUpdate)->pSamplers, array_size);
break;
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- pNewNode = (GENERIC_HEADER*)malloc(sizeof(XGL_UPDATE_SAMPLER_TEXTURES));
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ pNewNode = (GENERIC_HEADER*)malloc(sizeof(VK_UPDATE_SAMPLER_TEXTURES));
#if ALLOC_DEBUG
printf("Alloc12 #%lu pNewNode addr(%p)\n", ++g_alloc_count, (void*)pNewNode);
#endif
- memcpy(pNewNode, pUpdate, sizeof(XGL_UPDATE_SAMPLER_TEXTURES));
- array_size = sizeof(XGL_SAMPLER_IMAGE_VIEW_INFO) * ((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->count;
- ((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews = (XGL_SAMPLER_IMAGE_VIEW_INFO*)malloc(array_size);
+ memcpy(pNewNode, pUpdate, sizeof(VK_UPDATE_SAMPLER_TEXTURES));
+ array_size = sizeof(VK_SAMPLER_IMAGE_VIEW_INFO) * ((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->count;
+ ((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews = (VK_SAMPLER_IMAGE_VIEW_INFO*)malloc(array_size);
#if ALLOC_DEBUG
- printf("Alloc13 #%lu pNewNode->pSamplerImageViews addr(%p)\n", ++g_alloc_count, (void*)((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews);
+ printf("Alloc13 #%lu pNewNode->pSamplerImageViews addr(%p)\n", ++g_alloc_count, (void*)((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews);
#endif
- for (uint32_t i = 0; i < ((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->count; i++) {
- memcpy((XGL_SAMPLER_IMAGE_VIEW_INFO*)&((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews[i], &((XGL_UPDATE_SAMPLER_TEXTURES*)pUpdate)->pSamplerImageViews[i], sizeof(XGL_SAMPLER_IMAGE_VIEW_INFO));
- ((XGL_SAMPLER_IMAGE_VIEW_INFO*)((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews)[i].pImageView = (XGL_IMAGE_VIEW_ATTACH_INFO*)malloc(sizeof(XGL_IMAGE_VIEW_ATTACH_INFO));
+ for (uint32_t i = 0; i < ((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->count; i++) {
+ memcpy((VK_SAMPLER_IMAGE_VIEW_INFO*)&((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews[i], &((VK_UPDATE_SAMPLER_TEXTURES*)pUpdate)->pSamplerImageViews[i], sizeof(VK_SAMPLER_IMAGE_VIEW_INFO));
+ ((VK_SAMPLER_IMAGE_VIEW_INFO*)((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews)[i].pImageView = (VK_IMAGE_VIEW_ATTACH_INFO*)malloc(sizeof(VK_IMAGE_VIEW_ATTACH_INFO));
#if ALLOC_DEBUG
- printf("Alloc14 #%lu pSamplerImageViews)[%u].pImageView addr(%p)\n", ++g_alloc_count, i, (void*)((XGL_SAMPLER_IMAGE_VIEW_INFO*)((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews)[i].pImageView);
+ printf("Alloc14 #%lu pSamplerImageViews)[%u].pImageView addr(%p)\n", ++g_alloc_count, i, (void*)((VK_SAMPLER_IMAGE_VIEW_INFO*)((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews)[i].pImageView);
#endif
- memcpy((XGL_IMAGE_VIEW_ATTACH_INFO*)((XGL_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews[i].pImageView, ((XGL_UPDATE_SAMPLER_TEXTURES*)pUpdate)->pSamplerImageViews[i].pImageView, sizeof(XGL_IMAGE_VIEW_ATTACH_INFO));
+ memcpy((VK_IMAGE_VIEW_ATTACH_INFO*)((VK_UPDATE_SAMPLER_TEXTURES*)pNewNode)->pSamplerImageViews[i].pImageView, ((VK_UPDATE_SAMPLER_TEXTURES*)pUpdate)->pSamplerImageViews[i].pImageView, sizeof(VK_IMAGE_VIEW_ATTACH_INFO));
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- pUICI = (XGL_UPDATE_IMAGES*)pUpdate;
- pNewNode = (GENERIC_HEADER*)malloc(sizeof(XGL_UPDATE_IMAGES));
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ pUICI = (VK_UPDATE_IMAGES*)pUpdate;
+ pNewNode = (GENERIC_HEADER*)malloc(sizeof(VK_UPDATE_IMAGES));
#if ALLOC_DEBUG
printf("Alloc15 #%lu pNewNode addr(%p)\n", ++g_alloc_count, (void*)pNewNode);
#endif
- memcpy(pNewNode, pUpdate, sizeof(XGL_UPDATE_IMAGES));
- total_array_size = (sizeof(XGL_IMAGE_VIEW_ATTACH_INFO) * ((XGL_UPDATE_IMAGES*)pNewNode)->count);
- ppLocalImageViews = (XGL_IMAGE_VIEW_ATTACH_INFO**)&(((XGL_UPDATE_IMAGES*)pNewNode)->pImageViews);
- *ppLocalImageViews = (XGL_IMAGE_VIEW_ATTACH_INFO*)malloc(total_array_size);
+ memcpy(pNewNode, pUpdate, sizeof(VK_UPDATE_IMAGES));
+ total_array_size = (sizeof(VK_IMAGE_VIEW_ATTACH_INFO) * ((VK_UPDATE_IMAGES*)pNewNode)->count);
+ ppLocalImageViews = (VK_IMAGE_VIEW_ATTACH_INFO**)&(((VK_UPDATE_IMAGES*)pNewNode)->pImageViews);
+ *ppLocalImageViews = (VK_IMAGE_VIEW_ATTACH_INFO*)malloc(total_array_size);
#if ALLOC_DEBUG
printf("Alloc16 #%lu *ppLocalImageViews addr(%p)\n", ++g_alloc_count, (void*)*ppLocalImageViews);
#endif
memcpy((void*)*ppLocalImageViews, pUICI->pImageViews, total_array_size);
break;
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- pUBCI = (XGL_UPDATE_BUFFERS*)pUpdate;
- pNewNode = (GENERIC_HEADER*)malloc(sizeof(XGL_UPDATE_BUFFERS));
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ pUBCI = (VK_UPDATE_BUFFERS*)pUpdate;
+ pNewNode = (GENERIC_HEADER*)malloc(sizeof(VK_UPDATE_BUFFERS));
#if ALLOC_DEBUG
printf("Alloc17 #%lu pNewNode addr(%p)\n", ++g_alloc_count, (void*)pNewNode);
#endif
- memcpy(pNewNode, pUpdate, sizeof(XGL_UPDATE_BUFFERS));
- total_array_size = (sizeof(XGL_BUFFER_VIEW_ATTACH_INFO) * pUBCI->count);
- ppLocalBufferViews = (XGL_BUFFER_VIEW_ATTACH_INFO**)&(((XGL_UPDATE_BUFFERS*)pNewNode)->pBufferViews);
- *ppLocalBufferViews = (XGL_BUFFER_VIEW_ATTACH_INFO*)malloc(total_array_size);
+ memcpy(pNewNode, pUpdate, sizeof(VK_UPDATE_BUFFERS));
+ total_array_size = (sizeof(VK_BUFFER_VIEW_ATTACH_INFO) * pUBCI->count);
+ ppLocalBufferViews = (VK_BUFFER_VIEW_ATTACH_INFO**)&(((VK_UPDATE_BUFFERS*)pNewNode)->pBufferViews);
+ *ppLocalBufferViews = (VK_BUFFER_VIEW_ATTACH_INFO*)malloc(total_array_size);
#if ALLOC_DEBUG
printf("Alloc18 #%lu *ppLocalBufferViews addr(%p)\n", ++g_alloc_count, (void*)*ppLocalBufferViews);
#endif
memcpy((void*)*ppLocalBufferViews, pUBCI->pBufferViews, total_array_size);
break;
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
- pNewNode = (GENERIC_HEADER*)malloc(sizeof(XGL_UPDATE_AS_COPY));
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ pNewNode = (GENERIC_HEADER*)malloc(sizeof(VK_UPDATE_AS_COPY));
#if ALLOC_DEBUG
printf("Alloc19 #%lu pNewNode addr(%p)\n", ++g_alloc_count, (void*)pNewNode);
#endif
- memcpy(pNewNode, pUpdate, sizeof(XGL_UPDATE_AS_COPY));
+ memcpy(pNewNode, pUpdate, sizeof(VK_UPDATE_AS_COPY));
break;
default:
- sprintf(str, "Unexpected UPDATE struct of type %s (value %u) in xglUpdateDescriptors() struct tree", string_XGL_STRUCTURE_TYPE(pUpdate->sType), pUpdate->sType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_INVALID_UPDATE_STRUCT, "DS", str);
+ sprintf(str, "Unexpected UPDATE struct of type %s (value %u) in vkUpdateDescriptors() struct tree", string_VK_STRUCTURE_TYPE(pUpdate->sType), pUpdate->sType);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_INVALID_UPDATE_STRUCT, "DS", str);
return NULL;
}
// Make sure that pNext for the end of shadow copy is NULL
return pNewNode;
}
// For given ds, update its mapping based on ppUpdateArray
-static void dsUpdate(XGL_DESCRIPTOR_SET ds, uint32_t updateCount, const void** ppUpdateArray)
+static void dsUpdate(VK_DESCRIPTOR_SET ds, uint32_t updateCount, const void** ppUpdateArray)
{
SET_NODE* pSet = getSetNode(ds);
loader_platform_thread_lock_mutex(&globalLock);
g_lastBoundDescriptorSet = pSet->set;
LAYOUT_NODE* pLayout = NULL;
- XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pLayoutCI = NULL;
+ VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pLayoutCI = NULL;
// TODO : If pCIList is NULL, flag error
// Perform all updates
for (uint32_t i = 0; i < updateCount; i++) {
// Make sure that binding is within bounds
if (pLayout->createInfo.count < getUpdateBinding(pUpdate)) {
char str[1024];
- sprintf(str, "Descriptor Set %p does not have binding to match update binding %u for update type %s!", ds, getUpdateBinding(pUpdate), string_XGL_STRUCTURE_TYPE(pUpdate->sType));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_INVALID_UPDATE_INDEX, "DS", str);
+ sprintf(str, "Descriptor Set %p does not have binding to match update binding %u for update type %s!", (void*)ds, getUpdateBinding(pUpdate), string_VK_STRUCTURE_TYPE(pUpdate->sType));
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_INVALID_UPDATE_INDEX, "DS", str);
}
else {
// Next verify that update falls within size of given binding
if (getBindingEndIndex(pLayout, getUpdateBinding(pUpdate)) < getUpdateEndIndex(pLayout, pUpdate)) {
char str[48*1024]; // TODO : Keep count of layout CI structs and size this string dynamically based on that count
pLayoutCI = &pLayout->createInfo;
- string DSstr = xgl_print_xgl_descriptor_set_layout_create_info(pLayoutCI, "{DS} ");
- sprintf(str, "Descriptor update type of %s is out of bounds for matching binding %u in Layout w/ CI:\n%s!", string_XGL_STRUCTURE_TYPE(pUpdate->sType), getUpdateBinding(pUpdate), DSstr.c_str());
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, "DS", str);
+ string DSstr = vk_print_vk_descriptor_set_layout_create_info(pLayoutCI, "{DS} ");
+ sprintf(str, "Descriptor update type of %s is out of bounds for matching binding %u in Layout w/ CI:\n%s!", string_VK_STRUCTURE_TYPE(pUpdate->sType), getUpdateBinding(pUpdate), DSstr.c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_DESCRIPTOR_UPDATE_OUT_OF_BOUNDS, "DS", str);
}
else { // TODO : should we skip update on a type mismatch or force it?
// Layout bindings match w/ update ok, now verify that update is of the right type
if (!validateUpdateType(pLayout, pUpdate)) {
char str[1024];
- sprintf(str, "Descriptor update type of %s does not match overlapping binding type!", string_XGL_STRUCTURE_TYPE(pUpdate->sType));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, "DS", str);
+ sprintf(str, "Descriptor update type of %s does not match overlapping binding type!", string_VK_STRUCTURE_TYPE(pUpdate->sType));
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, "DS", str);
}
else {
// Save the update info
GENERIC_HEADER* pNewNode = shadowUpdateNode(pUpdate);
if (NULL == pNewNode) {
char str[1024];
- sprintf(str, "Out of memory while attempting to allocate UPDATE struct in xglUpdateDescriptors()");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
+ sprintf(str, "Out of memory while attempting to allocate UPDATE struct in vkUpdateDescriptors()");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, ds, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
}
else {
// Insert shadow node into LL of updates for this set
pFreeUpdate = pShadowUpdate;
pShadowUpdate = (GENERIC_HEADER*)pShadowUpdate->pNext;
uint32_t index = 0;
- XGL_UPDATE_SAMPLERS* pUS = NULL;
- XGL_UPDATE_SAMPLER_TEXTURES* pUST = NULL;
- XGL_UPDATE_IMAGES* pUI = NULL;
- XGL_UPDATE_BUFFERS* pUB = NULL;
+ VK_UPDATE_SAMPLERS* pUS = NULL;
+ VK_UPDATE_SAMPLER_TEXTURES* pUST = NULL;
+ VK_UPDATE_IMAGES* pUI = NULL;
+ VK_UPDATE_BUFFERS* pUB = NULL;
void** ppToFree = NULL;
switch (pFreeUpdate->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- pUS = (XGL_UPDATE_SAMPLERS*)pFreeUpdate;
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ pUS = (VK_UPDATE_SAMPLERS*)pFreeUpdate;
if (pUS->pSamplers) {
ppToFree = (void**)&pUS->pSamplers;
#if ALLOC_DEBUG
free(*ppToFree);
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- pUST = (XGL_UPDATE_SAMPLER_TEXTURES*)pFreeUpdate;
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ pUST = (VK_UPDATE_SAMPLER_TEXTURES*)pFreeUpdate;
for (index = 0; index < pUST->count; index++) {
if (pUST->pSamplerImageViews[index].pImageView) {
ppToFree = (void**)&pUST->pSamplerImageViews[index].pImageView;
#endif
free(*ppToFree);
break;
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- pUI = (XGL_UPDATE_IMAGES*)pFreeUpdate;
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ pUI = (VK_UPDATE_IMAGES*)pFreeUpdate;
if (pUI->pImageViews) {
ppToFree = (void**)&pUI->pImageViews;
#if ALLOC_DEBUG
free(*ppToFree);
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- pUB = (XGL_UPDATE_BUFFERS*)pFreeUpdate;
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ pUB = (VK_UPDATE_BUFFERS*)pFreeUpdate;
if (pUB->pBufferViews) {
ppToFree = (void**)&pUB->pBufferViews;
#if ALLOC_DEBUG
free(*ppToFree);
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
break;
default:
assert(0);
// NOTE : Calls to this function should be wrapped in mutex
static void freePools()
{
- for (unordered_map<XGL_DESCRIPTOR_POOL, POOL_NODE*>::iterator ii=poolMap.begin(); ii!=poolMap.end(); ++ii) {
+ for (unordered_map<VK_DESCRIPTOR_POOL, POOL_NODE*>::iterator ii=poolMap.begin(); ii!=poolMap.end(); ++ii) {
SET_NODE* pSet = (*ii).second->pSets;
SET_NODE* pFreeSet = pSet;
while (pSet) {
// NOTE : Calls to this function should be wrapped in mutex
static void freeLayouts()
{
- for (unordered_map<XGL_DESCRIPTOR_SET_LAYOUT, LAYOUT_NODE*>::iterator ii=layoutMap.begin(); ii!=layoutMap.end(); ++ii) {
+ for (unordered_map<VK_DESCRIPTOR_SET_LAYOUT, LAYOUT_NODE*>::iterator ii=layoutMap.begin(); ii!=layoutMap.end(); ++ii) {
LAYOUT_NODE* pLayout = (*ii).second;
if (pLayout->pTypes) {
delete[] pLayout->pTypes;
}
// Currently clearing a set is removing all previous updates to that set
// TODO : Validate if this is correct clearing behavior
-static void clearDescriptorSet(XGL_DESCRIPTOR_SET set)
+static void clearDescriptorSet(VK_DESCRIPTOR_SET set)
{
SET_NODE* pSet = getSetNode(set);
if (!pSet) {
}
}
-static void clearDescriptorPool(XGL_DESCRIPTOR_POOL pool)
+static void clearDescriptorPool(VK_DESCRIPTOR_POOL pool)
{
POOL_NODE* pPool = getPoolNode(pool);
if (!pPool) {
char str[1024];
- sprintf(str, "Unable to find pool node for pool %p specified in xglClearDescriptorPool() call", (void*)pool);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pool, 0, DRAWSTATE_INVALID_POOL, "DS", str);
+ sprintf(str, "Unable to find pool node for pool %p specified in vkClearDescriptorPool() call", (void*)pool);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pool, 0, DRAWSTATE_INVALID_POOL, "DS", str);
}
else
{
}
}
// Code here to manage the Cmd buffer LL
-static GLOBAL_CB_NODE* getCBNode(XGL_CMD_BUFFER cb)
+static GLOBAL_CB_NODE* getCBNode(VK_CMD_BUFFER cb)
{
loader_platform_thread_lock_mutex(&globalLock);
if (cmdBufferMap.find(cb) == cmdBufferMap.end()) {
// NOTE : Calls to this function should be wrapped in mutex
static void freeCmdBuffers()
{
- for (unordered_map<XGL_CMD_BUFFER, GLOBAL_CB_NODE*>::iterator ii=cmdBufferMap.begin(); ii!=cmdBufferMap.end(); ++ii) {
+ for (unordered_map<VK_CMD_BUFFER, GLOBAL_CB_NODE*>::iterator ii=cmdBufferMap.begin(); ii!=cmdBufferMap.end(); ++ii) {
while (!(*ii).second->pCmds.empty()) {
delete (*ii).second->pCmds.back();
(*ii).second->pCmds.pop_back();
else {
char str[1024];
sprintf(str, "Out of memory while attempting to allocate new CMD_NODE for cmdBuffer %p", (void*)pCB->cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pCB->cmdBuffer, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pCB->cmdBuffer, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
}
}
-static void resetCB(const XGL_CMD_BUFFER cb)
+static void resetCB(const VK_CMD_BUFFER cb)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB) {
pCB->pCmds.pop_back();
}
// Reset CB state
- XGL_FLAGS saveFlags = pCB->flags;
+ VK_FLAGS saveFlags = pCB->flags;
uint32_t saveQueueNodeIndex = pCB->queueNodeIndex;
memset(pCB, 0, sizeof(GLOBAL_CB_NODE));
pCB->cmdBuffer = cb;
}
// Set the last bound dynamic state of given type
// TODO : Need to track this per cmdBuffer and correlate cmdBuffer for Draw w/ last bound for that cmdBuffer?
-static void setLastBoundDynamicState(const XGL_CMD_BUFFER cmdBuffer, const XGL_DYNAMIC_STATE_OBJECT state, const XGL_STATE_BIND_POINT sType)
+static void setLastBoundDynamicState(const VK_CMD_BUFFER cmdBuffer, const VK_DYNAMIC_STATE_OBJECT state, const VK_STATE_BIND_POINT sType)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
if (dynamicStateMap.find(state) == dynamicStateMap.end()) {
char str[1024];
sprintf(str, "Unable to find dynamic state object %p, was it ever created?", (void*)state);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, state, 0, DRAWSTATE_INVALID_DYNAMIC_STATE_OBJECT, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, state, 0, DRAWSTATE_INVALID_DYNAMIC_STATE_OBJECT, "DS", str);
}
else {
pCB->lastBoundDynamicState[sType] = dynamicStateMap[state];
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
}
// Print the last bound Gfx Pipeline
-static void printPipeline(const XGL_CMD_BUFFER cb)
+static void printPipeline(const VK_CMD_BUFFER cb)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB) {
// nothing to print
}
else {
- string pipeStr = xgl_print_xgl_graphics_pipeline_create_info(&pPipeTrav->graphicsPipelineCI, "{DS}").c_str();
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", pipeStr.c_str());
+ string pipeStr = vk_print_vk_graphics_pipeline_create_info(&pPipeTrav->graphicsPipelineCI, "{DS}").c_str();
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", pipeStr.c_str());
}
}
}
// Common Dot dumping code
-static void dsCoreDumpDot(const XGL_DESCRIPTOR_SET ds, FILE* pOutFile)
+static void dsCoreDumpDot(const VK_DESCRIPTOR_SET ds, FILE* pOutFile)
{
SET_NODE* pSet = getSetNode(ds);
if (pSet) {
char tmp_str[4*1024];
fprintf(pOutFile, "subgraph cluster_DescriptorPool\n{\nlabel=\"Descriptor Pool\"\n");
sprintf(tmp_str, "Pool (%p)", pPool->pool);
- char* pGVstr = xgl_gv_print_xgl_descriptor_pool_create_info(&pPool->createInfo, tmp_str);
+ char* pGVstr = vk_gv_print_vk_descriptor_pool_create_info(&pPool->createInfo, tmp_str);
fprintf(pOutFile, "%s", pGVstr);
free(pGVstr);
fprintf(pOutFile, "subgraph cluster_DescriptorSet\n{\nlabel=\"Descriptor Set (%p)\"\n", pSet->set);
uint32_t layout_index = 0;
++layout_index;
sprintf(tmp_str, "LAYOUT%u", layout_index);
- pGVstr = xgl_gv_print_xgl_descriptor_set_layout_create_info(&pLayout->createInfo, tmp_str);
+ pGVstr = vk_gv_print_vk_descriptor_set_layout_create_info(&pLayout->createInfo, tmp_str);
fprintf(pOutFile, "%s", pGVstr);
free(pGVstr);
if (pSet->pUpdateStructs) {
uint32_t i = 0;
for (i=0; i < pSet->descriptorCount; i++) {
if (pSet->ppDescriptors[i]) {
- fprintf(pOutFile, "<TR><TD PORT=\"slot%u\">slot%u</TD><TD>%s</TD></TR>", i, i, string_XGL_STRUCTURE_TYPE(pSet->ppDescriptors[i]->sType));
+ fprintf(pOutFile, "<TR><TD PORT=\"slot%u\">slot%u</TD><TD>%s</TD></TR>", i, i, string_VK_STRUCTURE_TYPE(pSet->ppDescriptors[i]->sType));
}
}
#define NUM_COLORS 7
uint32_t colorIdx = 0;
fprintf(pOutFile, "</TABLE>>\n];\n");
// Now add the views that are mapped to active descriptors
- XGL_UPDATE_SAMPLERS* pUS = NULL;
- XGL_UPDATE_SAMPLER_TEXTURES* pUST = NULL;
- XGL_UPDATE_IMAGES* pUI = NULL;
- XGL_UPDATE_BUFFERS* pUB = NULL;
- XGL_UPDATE_AS_COPY* pUAC = NULL;
- XGL_SAMPLER_CREATE_INFO* pSCI = NULL;
- XGL_IMAGE_VIEW_CREATE_INFO* pIVCI = NULL;
- XGL_BUFFER_VIEW_CREATE_INFO* pBVCI = NULL;
+ VK_UPDATE_SAMPLERS* pUS = NULL;
+ VK_UPDATE_SAMPLER_TEXTURES* pUST = NULL;
+ VK_UPDATE_IMAGES* pUI = NULL;
+ VK_UPDATE_BUFFERS* pUB = NULL;
+ VK_UPDATE_AS_COPY* pUAC = NULL;
+ VK_SAMPLER_CREATE_INFO* pSCI = NULL;
+ VK_IMAGE_VIEW_CREATE_INFO* pIVCI = NULL;
+ VK_BUFFER_VIEW_CREATE_INFO* pBVCI = NULL;
void** ppNextPtr = NULL;
void* pSaveNext = NULL;
for (i=0; i < pSet->descriptorCount; i++) {
if (pSet->ppDescriptors[i]) {
switch (pSet->ppDescriptors[i]->sType)
{
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLERS:
- pUS = (XGL_UPDATE_SAMPLERS*)pSet->ppDescriptors[i];
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLERS:
+ pUS = (VK_UPDATE_SAMPLERS*)pSet->ppDescriptors[i];
pSCI = getSamplerCreateInfo(pUS->pSamplers[i-pUS->arrayIndex]);
if (pSCI) {
sprintf(tmp_str, "SAMPLER%u", i);
- fprintf(pOutFile, "%s", xgl_gv_print_xgl_sampler_create_info(pSCI, tmp_str));
+ fprintf(pOutFile, "%s", vk_gv_print_vk_sampler_create_info(pSCI, tmp_str));
fprintf(pOutFile, "\"DESCRIPTORS\":slot%u -> \"%s\" [color=\"#%s\"];\n", i, tmp_str, edgeColors[colorIdx].c_str());
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
- pUST = (XGL_UPDATE_SAMPLER_TEXTURES*)pSet->ppDescriptors[i];
+ case VK_STRUCTURE_TYPE_UPDATE_SAMPLER_TEXTURES:
+ pUST = (VK_UPDATE_SAMPLER_TEXTURES*)pSet->ppDescriptors[i];
pSCI = getSamplerCreateInfo(pUST->pSamplerImageViews[i-pUST->arrayIndex].sampler);
if (pSCI) {
sprintf(tmp_str, "SAMPLER%u", i);
- fprintf(pOutFile, "%s", xgl_gv_print_xgl_sampler_create_info(pSCI, tmp_str));
+ fprintf(pOutFile, "%s", vk_gv_print_vk_sampler_create_info(pSCI, tmp_str));
fprintf(pOutFile, "\"DESCRIPTORS\":slot%u -> \"%s\" [color=\"#%s\"];\n", i, tmp_str, edgeColors[colorIdx].c_str());
}
pIVCI = getImageViewCreateInfo(pUST->pSamplerImageViews[i-pUST->arrayIndex].pImageView->view);
if (pIVCI) {
sprintf(tmp_str, "IMAGE_VIEW%u", i);
- fprintf(pOutFile, "%s", xgl_gv_print_xgl_image_view_create_info(pIVCI, tmp_str));
+ fprintf(pOutFile, "%s", vk_gv_print_vk_image_view_create_info(pIVCI, tmp_str));
fprintf(pOutFile, "\"DESCRIPTORS\":slot%u -> \"%s\" [color=\"#%s\"];\n", i, tmp_str, edgeColors[colorIdx].c_str());
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_IMAGES:
- pUI = (XGL_UPDATE_IMAGES*)pSet->ppDescriptors[i];
+ case VK_STRUCTURE_TYPE_UPDATE_IMAGES:
+ pUI = (VK_UPDATE_IMAGES*)pSet->ppDescriptors[i];
pIVCI = getImageViewCreateInfo(pUI->pImageViews[i-pUI->arrayIndex].view);
if (pIVCI) {
sprintf(tmp_str, "IMAGE_VIEW%u", i);
- fprintf(pOutFile, "%s", xgl_gv_print_xgl_image_view_create_info(pIVCI, tmp_str));
+ fprintf(pOutFile, "%s", vk_gv_print_vk_image_view_create_info(pIVCI, tmp_str));
fprintf(pOutFile, "\"DESCRIPTORS\":slot%u -> \"%s\" [color=\"#%s\"];\n", i, tmp_str, edgeColors[colorIdx].c_str());
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_BUFFERS:
- pUB = (XGL_UPDATE_BUFFERS*)pSet->ppDescriptors[i];
+ case VK_STRUCTURE_TYPE_UPDATE_BUFFERS:
+ pUB = (VK_UPDATE_BUFFERS*)pSet->ppDescriptors[i];
pBVCI = getBufferViewCreateInfo(pUB->pBufferViews[i-pUB->arrayIndex].view);
if (pBVCI) {
sprintf(tmp_str, "BUFFER_VIEW%u", i);
- fprintf(pOutFile, "%s", xgl_gv_print_xgl_buffer_view_create_info(pBVCI, tmp_str));
+ fprintf(pOutFile, "%s", vk_gv_print_vk_buffer_view_create_info(pBVCI, tmp_str));
fprintf(pOutFile, "\"DESCRIPTORS\":slot%u -> \"%s\" [color=\"#%s\"];\n", i, tmp_str, edgeColors[colorIdx].c_str());
}
break;
- case XGL_STRUCTURE_TYPE_UPDATE_AS_COPY:
- pUAC = (XGL_UPDATE_AS_COPY*)pSet->ppDescriptors[i];
+ case VK_STRUCTURE_TYPE_UPDATE_AS_COPY:
+ pUAC = (VK_UPDATE_AS_COPY*)pSet->ppDescriptors[i];
// TODO : Need to validate this code
// Save off pNext and set to NULL while printing this struct, then restore it
ppNextPtr = (void**)&pUAC->pNext;
pSaveNext = *ppNextPtr;
*ppNextPtr = NULL;
sprintf(tmp_str, "UPDATE_AS_COPY%u", i);
- fprintf(pOutFile, "%s", xgl_gv_print_xgl_update_as_copy(pUAC, tmp_str));
+ fprintf(pOutFile, "%s", vk_gv_print_vk_update_as_copy(pUAC, tmp_str));
fprintf(pOutFile, "\"DESCRIPTORS\":slot%u -> \"%s\" [color=\"#%s\"];\n", i, tmp_str, edgeColors[colorIdx].c_str());
// Restore next ptr
*ppNextPtr = pSaveNext;
}
}
// Dump subgraph w/ DS info
-static void dsDumpDot(const XGL_CMD_BUFFER cb, FILE* pOutFile)
+static void dsDumpDot(const VK_CMD_BUFFER cb, FILE* pOutFile)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB && pCB->lastBoundDescriptorSet) {
fprintf(pOutFile, "digraph g {\ngraph [\nrankdir = \"TB\"\n];\nnode [\nfontsize = \"16\"\nshape = \"plaintext\"\n];\nedge [\n];\n");
fprintf(pOutFile, "subgraph cluster_dynamicState\n{\nlabel=\"Dynamic State\"\n");
char* pGVstr = NULL;
- for (uint32_t i = 0; i < XGL_NUM_STATE_BIND_POINT; i++) {
+ for (uint32_t i = 0; i < VK_NUM_STATE_BIND_POINT; i++) {
if (g_lastBoundDynamicState[i] && g_lastBoundDynamicState[i]->pCreateInfo) {
- pGVstr = dynamic_gv_display(g_lastBoundDynamicState[i]->pCreateInfo, string_XGL_STATE_BIND_POINT((XGL_STATE_BIND_POINT)i));
+ pGVstr = dynamic_gv_display(g_lastBoundDynamicState[i]->pCreateInfo, string_VK_STATE_BIND_POINT((VK_STATE_BIND_POINT)i));
fprintf(pOutFile, "%s", pGVstr);
free(pGVstr);
}
}
fprintf(pOutFile, "}\n"); // close dynamicState subgraph
fprintf(pOutFile, "subgraph cluster_PipelineStateObject\n{\nlabel=\"Pipeline State Object\"\n");
- pGVstr = xgl_gv_print_xgl_graphics_pipeline_create_info(&pPipeTrav->graphicsPipelineCI, "PSO HEAD");
+ pGVstr = vk_gv_print_vk_graphics_pipeline_create_info(&pPipeTrav->graphicsPipelineCI, "PSO HEAD");
fprintf(pOutFile, "%s", pGVstr);
free(pGVstr);
fprintf(pOutFile, "}\n");
}
}
// Dump a GraphViz dot file showing the pipeline for a given CB
-static void dumpDotFile(const XGL_CMD_BUFFER cb, string outFileName)
+static void dumpDotFile(const VK_CMD_BUFFER cb, string outFileName)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB) {
fprintf(pOutFile, "digraph g {\ngraph [\nrankdir = \"TB\"\n];\nnode [\nfontsize = \"16\"\nshape = \"plaintext\"\n];\nedge [\n];\n");
fprintf(pOutFile, "subgraph cluster_dynamicState\n{\nlabel=\"Dynamic State\"\n");
char* pGVstr = NULL;
- for (uint32_t i = 0; i < XGL_NUM_STATE_BIND_POINT; i++) {
+ for (uint32_t i = 0; i < VK_NUM_STATE_BIND_POINT; i++) {
if (pCB->lastBoundDynamicState[i] && pCB->lastBoundDynamicState[i]->pCreateInfo) {
- pGVstr = dynamic_gv_display(pCB->lastBoundDynamicState[i]->pCreateInfo, string_XGL_STATE_BIND_POINT((XGL_STATE_BIND_POINT)i));
+ pGVstr = dynamic_gv_display(pCB->lastBoundDynamicState[i]->pCreateInfo, string_VK_STATE_BIND_POINT((VK_STATE_BIND_POINT)i));
fprintf(pOutFile, "%s", pGVstr);
free(pGVstr);
}
}
fprintf(pOutFile, "}\n"); // close dynamicState subgraph
fprintf(pOutFile, "subgraph cluster_PipelineStateObject\n{\nlabel=\"Pipeline State Object\"\n");
- pGVstr = xgl_gv_print_xgl_graphics_pipeline_create_info(&pPipeTrav->graphicsPipelineCI, "PSO HEAD");
+ pGVstr = vk_gv_print_vk_graphics_pipeline_create_info(&pPipeTrav->graphicsPipelineCI, "PSO HEAD");
fprintf(pOutFile, "%s", pGVstr);
free(pGVstr);
fprintf(pOutFile, "}\n");
}
}
// Verify VB Buffer binding
-static void validateVBBinding(const XGL_CMD_BUFFER cb)
+static void validateVBBinding(const VK_CMD_BUFFER cb)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB && pCB->lastBoundPipeline) {
char str[1024];
if (!pPipeTrav) {
sprintf(str, "Can't find last bound Pipeline %p!", (void*)pCB->lastBoundPipeline);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NO_PIPELINE_BOUND, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NO_PIPELINE_BOUND, "DS", str);
}
else {
// Verify Vtx binding
if (pCB->lastVtxBinding >= pPipeTrav->vtxBindingCount) {
if (0 == pPipeTrav->vtxBindingCount) {
sprintf(str, "Vtx Buffer Index %u was bound, but no vtx buffers are attached to PSO.", pCB->lastVtxBinding);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, "DS", str);
}
else {
sprintf(str, "Vtx binding Index of %u exceeds PSO pVertexBindingDescriptions max array index of %u.", pCB->lastVtxBinding, (pPipeTrav->vtxBindingCount - 1));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, "DS", str);
}
}
else {
- string tmpStr = xgl_print_xgl_vertex_input_binding_description(&pPipeTrav->pVertexBindingDescriptions[pCB->lastVtxBinding], "{DS}INFO : ").c_str();
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmpStr.c_str());
+ string tmpStr = vk_print_vk_vertex_input_binding_description(&pPipeTrav->pVertexBindingDescriptions[pCB->lastVtxBinding], "{DS}INFO : ").c_str();
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmpStr.c_str());
}
}
}
}
}
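The range check above reduces to a small predicate: a vertex buffer binding index is valid only when the bound PSO declared at least `index + 1` vertex binding descriptions, with a distinct diagnostic when the PSO declared none at all. A minimal standalone sketch of that logic (names and the enum are hypothetical, not part of the layer):

```cpp
#include <cstdint>

// Hypothetical distillation of validateVBBinding()'s range check.
enum class VtxBindingCheck { Ok, NoBindingsInPso, IndexOutOfBounds };

VtxBindingCheck checkVtxBinding(uint32_t lastVtxBinding, uint32_t psoVtxBindingCount)
{
    if (lastVtxBinding >= psoVtxBindingCount) {
        // Distinguish "PSO has no vtx buffers at all" from "index too large"
        return (psoVtxBindingCount == 0) ? VtxBindingCheck::NoBindingsInPso
                                         : VtxBindingCheck::IndexOutOfBounds;
    }
    return VtxBindingCheck::Ok;
}
```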
// Print details of DS config to stdout
-static void printDSConfig(const XGL_CMD_BUFFER cb)
+static void printDSConfig(const VK_CMD_BUFFER cb)
{
char tmp_str[1024];
char ds_config_str[1024*256] = {0}; // TODO : Currently making this buffer HUGE w/o overrun protection. Need to be smarter, start smaller, and grow as needed.
POOL_NODE* pPool = getPoolNode(pSet->pool);
// Print out pool details
sprintf(tmp_str, "Details for pool %p.", (void*)pPool->pool);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
- string poolStr = xgl_print_xgl_descriptor_pool_create_info(&pPool->createInfo, " ");
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
+ string poolStr = vk_print_vk_descriptor_pool_create_info(&pPool->createInfo, " ");
sprintf(ds_config_str, "%s", poolStr.c_str());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", ds_config_str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", ds_config_str);
// Print out set details
char prefix[10];
uint32_t index = 0;
sprintf(tmp_str, "Details for descriptor set %p.", (void*)pSet->set);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
LAYOUT_NODE* pLayout = pSet->pLayout;
// Print layout details
sprintf(tmp_str, "Layout #%u, (object %p) for DS %p.", index+1, (void*)pLayout->layout, (void*)pSet->set);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
sprintf(prefix, " [L%u] ", index);
- string DSLstr = xgl_print_xgl_descriptor_set_layout_create_info(&pLayout->createInfo, prefix).c_str();
+ string DSLstr = vk_print_vk_descriptor_set_layout_create_info(&pLayout->createInfo, prefix);
sprintf(ds_config_str, "%s", DSLstr.c_str());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", ds_config_str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", ds_config_str);
index++;
GENERIC_HEADER* pUpdate = pSet->pUpdateStructs;
if (pUpdate) {
sprintf(tmp_str, "Update Chain [UC] for descriptor set %p:", (void*)pSet->set);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
sprintf(prefix, " [UC] ");
sprintf(ds_config_str, "%s", dynamic_display(pUpdate, prefix).c_str());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", ds_config_str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", ds_config_str);
// TODO : If there is a "view" associated with this update, print CI for that view
}
else {
- sprintf(tmp_str, "No Update Chain for descriptor set %p (xglUpdateDescriptors has not been called)", (void*)pSet->set);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
+ sprintf(tmp_str, "No Update Chain for descriptor set %p (vkUpdateDescriptors has not been called)", (void*)pSet->set);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", tmp_str);
}
}
}
-static void printCB(const XGL_CMD_BUFFER cb)
+static void printCB(const VK_CMD_BUFFER cb)
{
GLOBAL_CB_NODE* pCB = getCBNode(cb);
if (pCB) {
char str[1024];
sprintf(str, "Cmds in CB %p", (void*)cb);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_NONE, "DS", str);
for (vector<CMD_NODE*>::iterator ii=pCB->pCmds.begin(); ii!=pCB->pCmds.end(); ++ii) {
sprintf(str, " CMD#%lu: %s", (*ii)->cmdNumber, cmdTypeToString((*ii)->type).c_str());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, cb, 0, DRAWSTATE_NONE, "DS", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, cb, 0, DRAWSTATE_NONE, "DS", str);
}
}
else {
}
}
-static void synchAndPrintDSConfig(const XGL_CMD_BUFFER cb)
+static void synchAndPrintDSConfig(const VK_CMD_BUFFER cb)
{
printDSConfig(cb);
printPipeline(cb);
getLayerOptionEnum("DrawStateReportLevel", (uint32_t *) &g_reportingLevel);
g_actionIsDefault = getLayerOptionEnum("DrawStateDebugAction", (uint32_t *) &g_debugAction);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
{
strOpt = getLayerOption("DrawStateLogFilename");
if (strOpt)
}
// initialize Layer dispatch table
// TODO handle multiple GPUs
- xglGetProcAddrType fpNextGPA;
+ vkGetProcAddrType fpNextGPA;
fpNextGPA = pCurObj->pGPA;
assert(fpNextGPA);
- layer_initialize_dispatch_table(&nextTable, fpNextGPA, (XGL_PHYSICAL_GPU) pCurObj->nextObject);
+ layer_initialize_dispatch_table(&nextTable, fpNextGPA, (VK_PHYSICAL_GPU) pCurObj->nextObject);
- xglGetProcAddrType fpGetProcAddr = (xglGetProcAddrType)fpNextGPA((XGL_PHYSICAL_GPU) pCurObj->nextObject, (char *) "xglGetProcAddr");
+ vkGetProcAddrType fpGetProcAddr = (vkGetProcAddrType)fpNextGPA((VK_PHYSICAL_GPU) pCurObj->nextObject, (char *) "vkGetProcAddr");
nextTable.GetProcAddr = fpGetProcAddr;
if (!globalLockInitialized)
{
// TODO/TBD: Need to delete this mutex sometime. How??? One
- // suggestion is to call this during xglCreateInstance(), and then we
- // can clean it up during xglDestroyInstance(). However, that requires
+ // suggestion is to call this during vkCreateInstance(), and then we
+ // can clean it up during vkDestroyInstance(). However, that requires
// that the layer have per-instance locks. We need to come back and
// address this soon.
loader_platform_thread_create_mutex(&globalLock);
}
}
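The block above relies on `loader_platform_thread_once` so that dispatch-table setup and mutex creation happen exactly once even when several threads race into the layer. A portable sketch of the same pattern, assuming `std::call_once` rather than the loader's platform wrappers (names hypothetical):

```cpp
#include <mutex>

// One-time init: the first caller creates the global lock; later calls are no-ops.
static std::once_flag g_initOnceFlag;
static std::mutex* g_lock = nullptr;

void initDrawStateOnce()
{
    std::call_once(g_initOnceFlag, [] { g_lock = new std::mutex; });
}
```

As the TODO notes, this still leaks the mutex at teardown; tying its lifetime to per-instance create/destroy would fix that.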
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&g_initOnce, initDrawState);
- XGL_RESULT result = nextTable.CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ VK_RESULT result = nextTable.CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyDevice(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyDevice(VK_DEVICE device)
{
// Free all the memory
loader_platform_thread_lock_mutex(&globalLock);
freePools();
freeLayouts();
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.DestroyDevice(device);
+ VK_RESULT result = nextTable.DestroyDevice(device);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_RESULT result;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_RESULT result;
/* This entrypoint is NOT going to init its own dispatch table since loader calls here early */
if (!strcmp(pExtName, "DrawState") || !strcmp(pExtName, "drawStateDumpDotFile") ||
!strcmp(pExtName, "drawStateDumpCommandBufferDotFile") || !strcmp(pExtName, "drawStateDumpPngFile"))
{
- result = XGL_SUCCESS;
+ result = VK_SUCCESS;
} else if (nextTable.GetExtensionSupport != NULL)
{
- result = nextTable.GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ result = nextTable.GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
} else
{
- result = XGL_ERROR_INVALID_EXTENSION;
+ result = VK_ERROR_INVALID_EXTENSION;
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
{
if (gpu != NULL)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&g_initOnce, initDrawState);
- XGL_RESULT result = nextTable.EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ VK_RESULT result = nextTable.EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
return result;
} else
{
if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
// This layer is compatible with all GPUs
*pOutLayerCount = 1;
strncpy((char *) pOutLayers[0], "DrawState", maxStringSize);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueSubmit(XGL_QUEUE queue, uint32_t cmdBufferCount, const XGL_CMD_BUFFER* pCmdBuffers, XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueSubmit(VK_QUEUE queue, uint32_t cmdBufferCount, const VK_CMD_BUFFER* pCmdBuffers, VK_FENCE fence)
{
for (uint32_t i=0; i < cmdBufferCount; i++) {
// Validate that cmd buffers have been updated
}
- XGL_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, fence);
+ VK_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, fence);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyObject(XGL_OBJECT object)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyObject(VK_OBJECT object)
{
// TODO : When wrapped objects (such as dynamic state) are destroyed, need to clean up memory
- XGL_RESULT result = nextTable.DestroyObject(object);
+ VK_RESULT result = nextTable.DestroyObject(object);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBufferView(XGL_DEVICE device, const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo, XGL_BUFFER_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBufferView(VK_DEVICE device, const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo, VK_BUFFER_VIEW* pView)
{
- XGL_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
BUFFER_NODE* pNewNode = new BUFFER_NODE;
pNewNode->buffer = *pView;
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImageView(XGL_DEVICE device, const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo, XGL_IMAGE_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImageView(VK_DEVICE device, const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo, VK_IMAGE_VIEW* pView)
{
- XGL_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
IMAGE_NODE *pNewNode = new IMAGE_NODE;
pNewNode->image = *pView;
return result;
}
-static void track_pipeline(const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+static void track_pipeline(const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
+ // Create LL HEAD for this Pipeline
+ loader_platform_thread_lock_mutex(&globalLock);
PIPELINE_NODE* pPipeNode = new PIPELINE_NODE;
memset((void*)pPipeNode, 0, sizeof(PIPELINE_NODE));
pPipeNode->pipeline = *pPipeline;
initPipeline(pPipeNode, pCreateInfo);
+ loader_platform_thread_unlock_mutex(&globalLock);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipeline(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipeline(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
+ VK_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
-// Create LL HEAD for this Pipeline
char str[1024];
sprintf(str, "Created Gfx Pipeline %p", (void*)*pPipeline);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, (XGL_BASE_OBJECT)pPipeline, 0, DRAWSTATE_NONE, "DS", str);
- loader_platform_thread_lock_mutex(&globalLock);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, *pPipeline, 0, DRAWSTATE_NONE, "DS", str);
track_pipeline(pCreateInfo, pPipeline);
- loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipelineDerivative(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipelineDerivative(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.CreateGraphicsPipelineDerivative(device, pCreateInfo, basePipeline, pPipeline);
+ VK_RESULT result = nextTable.CreateGraphicsPipelineDerivative(device, pCreateInfo, basePipeline, pPipeline);
-// Create LL HEAD for this Pipeline
char str[1024];
sprintf(str, "Created Gfx Pipeline %p (derived from pipeline %p)", (void*)*pPipeline, basePipeline);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, (XGL_BASE_OBJECT)pPipeline, 0, DRAWSTATE_NONE, "DS", str);
- loader_platform_thread_lock_mutex(&globalLock);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, *pPipeline, 0, DRAWSTATE_NONE, "DS", str);
track_pipeline(pCreateInfo, pPipeline);
- loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateSampler(XGL_DEVICE device, const XGL_SAMPLER_CREATE_INFO* pCreateInfo, XGL_SAMPLER* pSampler)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateSampler(VK_DEVICE device, const VK_SAMPLER_CREATE_INFO* pCreateInfo, VK_SAMPLER* pSampler)
{
- XGL_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
SAMPLER_NODE* pNewNode = new SAMPLER_NODE;
pNewNode->sampler = *pSampler;
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayout(XGL_DEVICE device, const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_SET_LAYOUT* pSetLayout)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayout(VK_DEVICE device, const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_SET_LAYOUT* pSetLayout)
{
- XGL_RESULT result = nextTable.CreateDescriptorSetLayout(device, pCreateInfo, pSetLayout);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateDescriptorSetLayout(device, pCreateInfo, pSetLayout);
+ if (VK_SUCCESS == result) {
LAYOUT_NODE* pNewNode = new LAYOUT_NODE;
if (NULL == pNewNode) {
char str[1024];
- sprintf(str, "Out of memory while attempting to allocate LAYOUT_NODE in xglCreateDescriptorSetLayout()");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, *pSetLayout, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
+ sprintf(str, "Out of memory while attempting to allocate LAYOUT_NODE in vkCreateDescriptorSetLayout()");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, *pSetLayout, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
}
memset(pNewNode, 0, sizeof(LAYOUT_NODE));
- memcpy((void*)&pNewNode->createInfo, pCreateInfo, sizeof(XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO));
- pNewNode->createInfo.pBinding = new XGL_DESCRIPTOR_SET_LAYOUT_BINDING[pCreateInfo->count];
- memcpy((void*)pNewNode->createInfo.pBinding, pCreateInfo->pBinding, sizeof(XGL_DESCRIPTOR_SET_LAYOUT_BINDING)*pCreateInfo->count);
+ memcpy((void*)&pNewNode->createInfo, pCreateInfo, sizeof(VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO));
+ pNewNode->createInfo.pBinding = new VK_DESCRIPTOR_SET_LAYOUT_BINDING[pCreateInfo->count];
+ memcpy((void*)pNewNode->createInfo.pBinding, pCreateInfo->pBinding, sizeof(VK_DESCRIPTOR_SET_LAYOUT_BINDING)*pCreateInfo->count);
uint32_t totalCount = 0;
for (uint32_t i=0; i<pCreateInfo->count; i++) {
totalCount += pCreateInfo->pBinding[i].count;
if (pCreateInfo->pBinding[i].pImmutableSamplers) {
void** ppImmutableSamplers = (void**)&pNewNode->createInfo.pBinding[i].pImmutableSamplers;
- *ppImmutableSamplers = malloc(sizeof(XGL_SAMPLER)*pCreateInfo->pBinding[i].count);
- memcpy(*ppImmutableSamplers, pCreateInfo->pBinding[i].pImmutableSamplers, pCreateInfo->pBinding[i].count*sizeof(XGL_SAMPLER));
+ *ppImmutableSamplers = malloc(sizeof(VK_SAMPLER)*pCreateInfo->pBinding[i].count);
+ memcpy(*ppImmutableSamplers, pCreateInfo->pBinding[i].pImmutableSamplers, pCreateInfo->pBinding[i].count*sizeof(VK_SAMPLER));
}
}
if (totalCount > 0) {
- pNewNode->pTypes = new XGL_DESCRIPTOR_TYPE[totalCount];
+ pNewNode->pTypes = new VK_DESCRIPTOR_TYPE[totalCount];
uint32_t offset = 0;
uint32_t j = 0;
for (uint32_t i=0; i<pCreateInfo->count; i++) {
return result;
}
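The `totalCount` loop above flattens the per-binding counts into one per-descriptor type array, so a later update at descriptor index `j` can be validated against the type it was declared with. A self-contained sketch of that flattening (the `Binding` struct here is a stand-in, not the real `VK_DESCRIPTOR_SET_LAYOUT_BINDING`):

```cpp
#include <cstdint>
#include <vector>

// Stand-in for one layout binding: `count` descriptors of a single type.
struct Binding { int type; uint32_t count; };

// Expand bindings into a flat per-descriptor type array.
std::vector<int> flattenTypes(const std::vector<Binding>& bindings)
{
    std::vector<int> types;
    for (const Binding& b : bindings)
        for (uint32_t i = 0; i < b.count; ++i)
            types.push_back(b.type);  // descriptor j inherits its binding's type
    return types;
}
```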
-XGL_RESULT XGLAPI xglCreateDescriptorSetLayoutChain(XGL_DEVICE device, uint32_t setLayoutArrayCount, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
+VK_RESULT VKAPI vkCreateDescriptorSetLayoutChain(VK_DEVICE device, uint32_t setLayoutArrayCount, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray, VK_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
{
- XGL_RESULT result = nextTable.CreateDescriptorSetLayoutChain(device, setLayoutArrayCount, pSetLayoutArray, pLayoutChain);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateDescriptorSetLayoutChain(device, setLayoutArrayCount, pSetLayoutArray, pLayoutChain);
+ if (VK_SUCCESS == result) {
// TODO : Need to capture the layout chains
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginDescriptorPoolUpdate(XGL_DEVICE device, XGL_DESCRIPTOR_UPDATE_MODE updateMode)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginDescriptorPoolUpdate(VK_DEVICE device, VK_DESCRIPTOR_UPDATE_MODE updateMode)
{
- XGL_RESULT result = nextTable.BeginDescriptorPoolUpdate(device, updateMode);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.BeginDescriptorPoolUpdate(device, updateMode);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
POOL_NODE* pPoolNode = poolMap.begin()->second;
if (!pPoolNode) {
char str[1024];
sprintf(str, "Unable to find pool node");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_INTERNAL_ERROR, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_INTERNAL_ERROR, "DS", str);
}
else {
pPoolNode->updateActive = 1;
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndDescriptorPoolUpdate(XGL_DEVICE device, XGL_CMD_BUFFER cmd)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndDescriptorPoolUpdate(VK_DEVICE device, VK_CMD_BUFFER cmd)
{
- XGL_RESULT result = nextTable.EndDescriptorPoolUpdate(device, cmd);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.EndDescriptorPoolUpdate(device, cmd);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
POOL_NODE* pPoolNode = poolMap.begin()->second;
if (!pPoolNode) {
char str[1024];
sprintf(str, "Unable to find pool node");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_INTERNAL_ERROR, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_INTERNAL_ERROR, "DS", str);
}
else {
if (!pPoolNode->updateActive) {
char str[1024];
- sprintf(str, "You must call xglBeginDescriptorPoolUpdate() before this call to xglEndDescriptorPoolUpdate()!");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_DS_END_WITHOUT_BEGIN, "DS", str);
+ sprintf(str, "You must call vkBeginDescriptorPoolUpdate() before this call to vkEndDescriptorPoolUpdate()!");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_DS_END_WITHOUT_BEGIN, "DS", str);
}
else {
pPoolNode->updateActive = 0;
return result;
}
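The Begin/End pair above is tracked with a single `updateActive` flag per pool: an End without a matching Begin is reported as `DRAWSTATE_DS_END_WITHOUT_BEGIN`, and Begin re-arms the flag. A minimal model of that state machine (hypothetical names, not the layer's `POOL_NODE`):

```cpp
// Minimal model of the pool's updateActive tracking.
struct PoolUpdateState {
    bool updateActive = false;
    void begin() { updateActive = true; }
    bool end()  // returns false on end-without-begin
    {
        if (!updateActive) return false;  // would flag DRAWSTATE_DS_END_WITHOUT_BEGIN
        updateActive = false;
        return true;
    }
};
```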
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorPool(XGL_DEVICE device, XGL_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const XGL_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_POOL* pDescriptorPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorPool(VK_DEVICE device, VK_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const VK_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_POOL* pDescriptorPool)
{
- XGL_RESULT result = nextTable.CreateDescriptorPool(device, poolUsage, maxSets, pCreateInfo, pDescriptorPool);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateDescriptorPool(device, poolUsage, maxSets, pCreateInfo, pDescriptorPool);
+ if (VK_SUCCESS == result) {
// Insert this pool into Global Pool LL at head
char str[1024];
sprintf(str, "Created Descriptor Pool %p", (void*)*pDescriptorPool);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, (XGL_BASE_OBJECT)pDescriptorPool, 0, DRAWSTATE_NONE, "DS", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, *pDescriptorPool, 0, DRAWSTATE_NONE, "DS", str);
loader_platform_thread_lock_mutex(&globalLock);
POOL_NODE* pNewNode = new POOL_NODE;
if (NULL == pNewNode) {
char str[1024];
- sprintf(str, "Out of memory while attempting to allocate POOL_NODE in xglCreateDescriptorPool()");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, (XGL_BASE_OBJECT)*pDescriptorPool, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
+ sprintf(str, "Out of memory while attempting to allocate POOL_NODE in vkCreateDescriptorPool()");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, (VK_BASE_OBJECT)*pDescriptorPool, 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
}
else {
memset(pNewNode, 0, sizeof(POOL_NODE));
- XGL_DESCRIPTOR_POOL_CREATE_INFO* pCI = (XGL_DESCRIPTOR_POOL_CREATE_INFO*)&pNewNode->createInfo;
- memcpy((void*)pCI, pCreateInfo, sizeof(XGL_DESCRIPTOR_POOL_CREATE_INFO));
+ VK_DESCRIPTOR_POOL_CREATE_INFO* pCI = (VK_DESCRIPTOR_POOL_CREATE_INFO*)&pNewNode->createInfo;
+ memcpy((void*)pCI, pCreateInfo, sizeof(VK_DESCRIPTOR_POOL_CREATE_INFO));
if (pNewNode->createInfo.count) {
- size_t typeCountSize = pNewNode->createInfo.count * sizeof(XGL_DESCRIPTOR_TYPE_COUNT);
- pNewNode->createInfo.pTypeCount = new XGL_DESCRIPTOR_TYPE_COUNT[typeCountSize];
+ size_t typeCountSize = pNewNode->createInfo.count * sizeof(VK_DESCRIPTOR_TYPE_COUNT);
+ pNewNode->createInfo.pTypeCount = new VK_DESCRIPTOR_TYPE_COUNT[pNewNode->createInfo.count];
memcpy((void*)pNewNode->createInfo.pTypeCount, pCreateInfo->pTypeCount, typeCountSize);
}
pNewNode->poolUsage = poolUsage;
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetDescriptorPool(XGL_DESCRIPTOR_POOL descriptorPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetDescriptorPool(VK_DESCRIPTOR_POOL descriptorPool)
{
- XGL_RESULT result = nextTable.ResetDescriptorPool(descriptorPool);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.ResetDescriptorPool(descriptorPool);
+ if (VK_SUCCESS == result) {
clearDescriptorPool(descriptorPool);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglAllocDescriptorSets(XGL_DESCRIPTOR_POOL descriptorPool, XGL_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayouts, XGL_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkAllocDescriptorSets(VK_DESCRIPTOR_POOL descriptorPool, VK_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayouts, VK_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount)
{
- XGL_RESULT result = nextTable.AllocDescriptorSets(descriptorPool, setUsage, count, pSetLayouts, pDescriptorSets, pCount);
- if ((XGL_SUCCESS == result) || (*pCount > 0)) {
+ VK_RESULT result = nextTable.AllocDescriptorSets(descriptorPool, setUsage, count, pSetLayouts, pDescriptorSets, pCount);
+ if ((VK_SUCCESS == result) || (*pCount > 0)) {
POOL_NODE *pPoolNode = getPoolNode(descriptorPool);
if (!pPoolNode) {
char str[1024];
- sprintf(str, "Unable to find pool node for pool %p specified in xglAllocDescriptorSets() call", (void*)descriptorPool);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, descriptorPool, 0, DRAWSTATE_INVALID_POOL, "DS", str);
+ sprintf(str, "Unable to find pool node for pool %p specified in vkAllocDescriptorSets() call", (void*)descriptorPool);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, descriptorPool, 0, DRAWSTATE_INVALID_POOL, "DS", str);
}
else {
for (uint32_t i = 0; i < *pCount; i++) {
char str[1024];
sprintf(str, "Created Descriptor Set %p", (void*)pDescriptorSets[i]);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_NONE, "DS", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_NONE, "DS", str);
// Create new set node and add to head of pool nodes
SET_NODE* pNewNode = new SET_NODE;
if (NULL == pNewNode) {
char str[1024];
- sprintf(str, "Out of memory while attempting to allocate SET_NODE in xglAllocDescriptorSets()");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
+ sprintf(str, "Out of memory while attempting to allocate SET_NODE in vkAllocDescriptorSets()");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_OUT_OF_MEMORY, "DS", str);
}
else {
memset(pNewNode, 0, sizeof(SET_NODE));
LAYOUT_NODE* pLayout = getLayoutNode(pSetLayouts[i]);
if (NULL == pLayout) {
char str[1024];
- sprintf(str, "Unable to find set layout node for layout %p specified in xglAllocDescriptorSets() call", (void*)pSetLayouts[i]);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pSetLayouts[i], 0, DRAWSTATE_INVALID_LAYOUT, "DS", str);
+ sprintf(str, "Unable to find set layout node for layout %p specified in vkAllocDescriptorSets() call", (void*)pSetLayouts[i]);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pSetLayouts[i], 0, DRAWSTATE_INVALID_LAYOUT, "DS", str);
}
pNewNode->pLayout = pLayout;
pNewNode->pool = descriptorPool;
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglClearDescriptorSets(XGL_DESCRIPTOR_POOL descriptorPool, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets)
+VK_LAYER_EXPORT void VKAPI vkClearDescriptorSets(VK_DESCRIPTOR_POOL descriptorPool, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets)
{
for (uint32_t i = 0; i < count; i++) {
clearDescriptorSet(pDescriptorSets[i]);
}
nextTable.ClearDescriptorSets(descriptorPool, count, pDescriptorSets);
}
-XGL_LAYER_EXPORT void XGLAPI xglUpdateDescriptors(XGL_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray)
+VK_LAYER_EXPORT void VKAPI vkUpdateDescriptors(VK_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray)
{
SET_NODE* pSet = getSetNode(descriptorSet);
if (!dsUpdateActive(descriptorSet)) {
char str[1024];
- sprintf(str, "You must call xglBeginDescriptorPoolUpdate() before this call to xglUpdateDescriptors()!");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pSet->pool, 0, DRAWSTATE_UPDATE_WITHOUT_BEGIN, "DS", str);
+ sprintf(str, "You must call vkBeginDescriptorPoolUpdate() before this call to vkUpdateDescriptors()!");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pSet->pool, 0, DRAWSTATE_UPDATE_WITHOUT_BEGIN, "DS", str);
}
else {
- // pUpdateChain is a Linked-list of XGL_UPDATE_* structures defining the mappings for the descriptors
+ // pUpdateChain is a Linked-list of VK_UPDATE_* structures defining the mappings for the descriptors
dsUpdate(descriptorSet, updateCount, ppUpdateArray);
}
nextTable.UpdateDescriptors(descriptorSet, updateCount, ppUpdateArray);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicViewportState(XGL_DEVICE device, const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_VP_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicViewportState(VK_DEVICE device, const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_VP_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
- insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, XGL_STATE_BIND_VIEWPORT);
+ VK_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
+ insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, VK_STATE_BIND_VIEWPORT);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicRasterState(XGL_DEVICE device, const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_RS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicRasterState(VK_DEVICE device, const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_RS_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
- insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, XGL_STATE_BIND_RASTER);
+ VK_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
+ insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, VK_STATE_BIND_RASTER);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicColorBlendState(XGL_DEVICE device, const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_CB_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicColorBlendState(VK_DEVICE device, const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_CB_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
- insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, XGL_STATE_BIND_COLOR_BLEND);
+ VK_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
+ insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, VK_STATE_BIND_COLOR_BLEND);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicDepthStencilState(XGL_DEVICE device, const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_DS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicDepthStencilState(VK_DEVICE device, const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_DS_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
- insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, XGL_STATE_BIND_DEPTH_STENCIL);
+ VK_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
+ insertDynamicState(*pState, (GENERIC_HEADER*)pCreateInfo, VK_STATE_BIND_DEPTH_STENCIL);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateCommandBuffer(XGL_DEVICE device, const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo, XGL_CMD_BUFFER* pCmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateCommandBuffer(VK_DEVICE device, const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo, VK_CMD_BUFFER* pCmdBuffer)
{
- XGL_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
GLOBAL_CB_NODE* pCB = new GLOBAL_CB_NODE;
memset(pCB, 0, sizeof(GLOBAL_CB_NODE));
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginCommandBuffer(XGL_CMD_BUFFER cmdBuffer, const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginCommandBuffer(VK_CMD_BUFFER cmdBuffer, const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
{
- XGL_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
+ if (VK_SUCCESS == result) {
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
if (CB_NEW != pCB->state)
resetCB(cmdBuffer);
pCB->state = CB_UPDATE_ACTIVE;
if (pBeginInfo->pNext) {
- XGL_CMD_BUFFER_GRAPHICS_BEGIN_INFO* pCbGfxBI = (XGL_CMD_BUFFER_GRAPHICS_BEGIN_INFO*)pBeginInfo->pNext;
- if (XGL_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO == pCbGfxBI->sType) {
+ VK_CMD_BUFFER_GRAPHICS_BEGIN_INFO* pCbGfxBI = (VK_CMD_BUFFER_GRAPHICS_BEGIN_INFO*)pBeginInfo->pNext;
+ if (VK_STRUCTURE_TYPE_CMD_BUFFER_GRAPHICS_BEGIN_INFO == pCbGfxBI->sType) {
pCB->activeRenderPass = pCbGfxBI->renderPassContinue.renderPass;
}
}
}
else {
char str[1024];
- sprintf(str, "In xglBeginCommandBuffer() and unable to find CmdBuffer Node for CB %p!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+        sprintf(str, "In vkBeginCommandBuffer() but unable to find CmdBuffer Node for CB %p!", (void*)cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
updateCBTracking(cmdBuffer);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
- XGL_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
+ if (VK_SUCCESS == result) {
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
pCB->state = CB_UPDATE_COMPLETE;
}
else {
char str[1024];
- sprintf(str, "In xglEndCommandBuffer() and unable to find CmdBuffer Node for CB %p!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+        sprintf(str, "In vkEndCommandBuffer() but unable to find CmdBuffer Node for CB %p!", (void*)cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
updateCBTracking(cmdBuffer);
//cbDumpDotFile("cb_dump.dot");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
- XGL_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
+ if (VK_SUCCESS == result) {
resetCB(cmdBuffer);
updateCBTracking(cmdBuffer);
}
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindPipeline(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_PIPELINE pipeline)
+VK_LAYER_EXPORT void VKAPI vkCmdBindPipeline(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_PIPELINE pipeline)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to bind Pipeline %p that doesn't exist!", (void*)pipeline);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pipeline, 0, DRAWSTATE_INVALID_PIPELINE, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pipeline, 0, DRAWSTATE_INVALID_PIPELINE, "DS", str);
}
}
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBindPipeline(cmdBuffer, pipelineBindPoint, pipeline);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDynamicStateObject(XGL_CMD_BUFFER cmdBuffer, XGL_STATE_BIND_POINT stateBindPoint, XGL_DYNAMIC_STATE_OBJECT state)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDynamicStateObject(VK_CMD_BUFFER cmdBuffer, VK_STATE_BIND_POINT stateBindPoint, VK_DYNAMIC_STATE_OBJECT state)
{
setLastBoundDynamicState(cmdBuffer, state, stateBindPoint);
nextTable.CmdBindDynamicStateObject(cmdBuffer, stateBindPoint, state);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDescriptorSets(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDescriptorSets(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
// TODO : This check here needs to be made at QueueSubmit time
/*
char str[1024];
- sprintf(str, "You must call xglEndDescriptorPoolUpdate(%p) before this call to xglCmdBindDescriptorSet()!", (void*)descriptorSet);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, descriptorSet, 0, DRAWSTATE_BINDING_DS_NO_END_UPDATE, "DS", str);
+        sprintf(str, "You must call vkEndDescriptorPoolUpdate(%p) before this call to vkCmdBindDescriptorSets()!", (void*)descriptorSet);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, descriptorSet, 0, DRAWSTATE_BINDING_DS_NO_END_UPDATE, "DS", str);
*/
}
loader_platform_thread_lock_mutex(&globalLock);
g_lastBoundDescriptorSet = pDescriptorSets[i];
loader_platform_thread_unlock_mutex(&globalLock);
char str[1024];
- sprintf(str, "DS %p bound on pipeline %s", (void*)pDescriptorSets[i], string_XGL_PIPELINE_BIND_POINT(pipelineBindPoint));
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_NONE, "DS", str);
+ sprintf(str, "DS %p bound on pipeline %s", (void*)pDescriptorSets[i], string_VK_PIPELINE_BIND_POINT(pipelineBindPoint));
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_NONE, "DS", str);
synchAndPrintDSConfig(cmdBuffer);
}
else {
char str[1024];
sprintf(str, "Attempt to bind DS %p that doesn't exist!", (void*)pDescriptorSets[i]);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_INVALID_SET, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pDescriptorSets[i], 0, DRAWSTATE_INVALID_SET, "DS", str);
}
}
}
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBindDescriptorSets(cmdBuffer, pipelineBindPoint, layoutChain, layoutChainSlot, count, pDescriptorSets, pUserData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindIndexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, XGL_INDEX_TYPE indexType)
+VK_LAYER_EXPORT void VKAPI vkCmdBindIndexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, VK_INDEX_TYPE indexType)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBindIndexBuffer(cmdBuffer, buffer, offset, indexType);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindVertexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t binding)
+VK_LAYER_EXPORT void VKAPI vkCmdBindVertexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t binding)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBindVertexBuffer(cmdBuffer, buffer, offset, binding);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDraw(XGL_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount)
+VK_LAYER_EXPORT void VKAPI vkCmdDraw(VK_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
addCmd(pCB, CMD_DRAW);
pCB->drawCount[DRAW]++;
char str[1024];
- sprintf(str, "xglCmdDraw() call #%lu, reporting DS state:", g_drawCount[DRAW]++);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
+ sprintf(str, "vkCmdDraw() call #%lu, reporting DS state:", g_drawCount[DRAW]++);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
synchAndPrintDSConfig(cmdBuffer);
}
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDraw(cmdBuffer, firstVertex, vertexCount, firstInstance, instanceCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexed(XGL_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexed(VK_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
addCmd(pCB, CMD_DRAWINDEXED);
pCB->drawCount[DRAW_INDEXED]++;
char str[1024];
- sprintf(str, "xglCmdDrawIndexed() call #%lu, reporting DS state:", g_drawCount[DRAW_INDEXED]++);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
+ sprintf(str, "vkCmdDrawIndexed() call #%lu, reporting DS state:", g_drawCount[DRAW_INDEXED]++);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
synchAndPrintDSConfig(cmdBuffer);
}
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDrawIndexed(cmdBuffer, firstIndex, indexCount, vertexOffset, firstInstance, instanceCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
addCmd(pCB, CMD_DRAWINDIRECT);
pCB->drawCount[DRAW_INDIRECT]++;
char str[1024];
- sprintf(str, "xglCmdDrawIndirect() call #%lu, reporting DS state:", g_drawCount[DRAW_INDIRECT]++);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
+ sprintf(str, "vkCmdDrawIndirect() call #%lu, reporting DS state:", g_drawCount[DRAW_INDIRECT]++);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
synchAndPrintDSConfig(cmdBuffer);
}
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDrawIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexedIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexedIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
addCmd(pCB, CMD_DRAWINDEXEDINDIRECT);
pCB->drawCount[DRAW_INDEXED_INDIRECT]++;
char str[1024];
- sprintf(str, "xglCmdDrawIndexedIndirect() call #%lu, reporting DS state:", g_drawCount[DRAW_INDEXED_INDIRECT]++);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
+ sprintf(str, "vkCmdDrawIndexedIndirect() call #%lu, reporting DS state:", g_drawCount[DRAW_INDEXED_INDIRECT]++);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_NONE, "DS", str);
synchAndPrintDSConfig(cmdBuffer);
}
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDrawIndexedIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatch(XGL_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatch(VK_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDispatch(cmdBuffer, x, y, z);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatchIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatchIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDispatchIndirect(cmdBuffer, buffer, offset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_COPY* pRegions)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdCopyBuffer(cmdBuffer, srcBuffer, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t regionCount, const XGL_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImage(VK_CMD_BUFFER cmdBuffer,
+                                          VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+                                          VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+                                          uint32_t regionCount, const VK_IMAGE_COPY* pRegions)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdCopyImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBlitImage(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t regionCount, const XGL_IMAGE_BLIT* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdBlitImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t regionCount, const VK_IMAGE_BLIT* pRegions)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBlitImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBufferToImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBufferToImage(VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdCopyBufferToImage(cmdBuffer, srcBuffer, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImageToBuffer(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_BUFFER destBuffer,
- uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImageToBuffer(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_BUFFER destBuffer,
+ uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdCopyImageToBuffer(cmdBuffer, srcImage, srcImageLayout, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCloneImageData(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout)
+VK_LAYER_EXPORT void VKAPI vkCmdCloneImageData(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdCloneImageData(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdUpdateBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE dataSize, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdUpdateBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE dataSize, const uint32_t* pData)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdUpdateBuffer(cmdBuffer, destBuffer, destOffset, dataSize, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdFillBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE fillSize, uint32_t data)
+VK_LAYER_EXPORT void VKAPI vkCmdFillBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE fillSize, uint32_t data)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdFillBuffer(cmdBuffer, destBuffer, destOffset, fillSize, data);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearColorImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout,
- XGL_CLEAR_COLOR color,
- uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearColorImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout,
+ VK_CLEAR_COLOR color,
+ uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdClearColorImage(cmdBuffer, image, imageLayout, color, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearDepthStencil(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout,
- float depth, uint32_t stencil,
- uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearDepthStencil(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout,
+ float depth, uint32_t stencil,
+ uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdClearDepthStencil(cmdBuffer, image, imageLayout, depth, stencil, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResolveImage(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t rectCount, const XGL_IMAGE_RESOLVE* pRects)
+VK_LAYER_EXPORT void VKAPI vkCmdResolveImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t rectCount, const VK_IMAGE_RESOLVE* pRects)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdResolveImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, rectCount, pRects);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdSetEvent(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent)
+VK_LAYER_EXPORT void VKAPI vkCmdSetEvent(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdSetEvent(cmdBuffer, event, pipeEvent);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetEvent(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent)
+VK_LAYER_EXPORT void VKAPI vkCmdResetEvent(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdResetEvent(cmdBuffer, event, pipeEvent);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdWaitEvents(XGL_CMD_BUFFER cmdBuffer, const XGL_EVENT_WAIT_INFO* pWaitInfo)
+VK_LAYER_EXPORT void VKAPI vkCmdWaitEvents(VK_CMD_BUFFER cmdBuffer, const VK_EVENT_WAIT_INFO* pWaitInfo)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdWaitEvents(cmdBuffer, pWaitInfo);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdPipelineBarrier(XGL_CMD_BUFFER cmdBuffer, const XGL_PIPELINE_BARRIER* pBarrier)
+VK_LAYER_EXPORT void VKAPI vkCmdPipelineBarrier(VK_CMD_BUFFER cmdBuffer, const VK_PIPELINE_BARRIER* pBarrier)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdPipelineBarrier(cmdBuffer, pBarrier);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot, XGL_FLAGS flags)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot, VK_FLAGS flags)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBeginQuery(cmdBuffer, queryPool, slot, flags);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot)
+VK_LAYER_EXPORT void VKAPI vkCmdEndQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdEndQuery(cmdBuffer, queryPool, slot);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetQueryPool(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
+VK_LAYER_EXPORT void VKAPI vkCmdResetQueryPool(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdResetQueryPool(cmdBuffer, queryPool, startQuery, queryCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdWriteTimestamp(XGL_CMD_BUFFER cmdBuffer, XGL_TIMESTAMP_TYPE timestampType, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdWriteTimestamp(VK_CMD_BUFFER cmdBuffer, VK_TIMESTAMP_TYPE timestampType, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdWriteTimestamp(cmdBuffer, timestampType, destBuffer, destOffset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdInitAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdInitAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdInitAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdLoadAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER srcBuffer, XGL_GPU_SIZE srcOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdLoadAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER srcBuffer, VK_GPU_SIZE srcOffset)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdLoadAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, srcBuffer, srcOffset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdSaveAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdSaveAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdSaveAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, destBuffer, destOffset);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateFramebuffer(XGL_DEVICE device, const XGL_FRAMEBUFFER_CREATE_INFO* pCreateInfo, XGL_FRAMEBUFFER* pFramebuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateFramebuffer(VK_DEVICE device, const VK_FRAMEBUFFER_CREATE_INFO* pCreateInfo, VK_FRAMEBUFFER* pFramebuffer)
{
- XGL_RESULT result = nextTable.CreateFramebuffer(device, pCreateInfo, pFramebuffer);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateFramebuffer(device, pCreateInfo, pFramebuffer);
+ if (VK_SUCCESS == result) {
// Shadow create info and store in map
- XGL_FRAMEBUFFER_CREATE_INFO* localFBCI = new XGL_FRAMEBUFFER_CREATE_INFO(*pCreateInfo);
+ VK_FRAMEBUFFER_CREATE_INFO* localFBCI = new VK_FRAMEBUFFER_CREATE_INFO(*pCreateInfo);
if (pCreateInfo->pColorAttachments) {
- localFBCI->pColorAttachments = new XGL_COLOR_ATTACHMENT_BIND_INFO[localFBCI->colorAttachmentCount];
- memcpy((void*)localFBCI->pColorAttachments, pCreateInfo->pColorAttachments, localFBCI->colorAttachmentCount*sizeof(XGL_COLOR_ATTACHMENT_BIND_INFO));
+ localFBCI->pColorAttachments = new VK_COLOR_ATTACHMENT_BIND_INFO[localFBCI->colorAttachmentCount];
+ memcpy((void*)localFBCI->pColorAttachments, pCreateInfo->pColorAttachments, localFBCI->colorAttachmentCount*sizeof(VK_COLOR_ATTACHMENT_BIND_INFO));
}
if (pCreateInfo->pDepthStencilAttachment) {
- localFBCI->pDepthStencilAttachment = new XGL_DEPTH_STENCIL_BIND_INFO[localFBCI->colorAttachmentCount];
- memcpy((void*)localFBCI->pDepthStencilAttachment, pCreateInfo->pDepthStencilAttachment, localFBCI->colorAttachmentCount*sizeof(XGL_DEPTH_STENCIL_BIND_INFO));
+ localFBCI->pDepthStencilAttachment = new VK_DEPTH_STENCIL_BIND_INFO[localFBCI->colorAttachmentCount];
+ memcpy((void*)localFBCI->pDepthStencilAttachment, pCreateInfo->pDepthStencilAttachment, localFBCI->colorAttachmentCount*sizeof(VK_DEPTH_STENCIL_BIND_INFO));
}
frameBufferMap[*pFramebuffer] = localFBCI;
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateRenderPass(XGL_DEVICE device, const XGL_RENDER_PASS_CREATE_INFO* pCreateInfo, XGL_RENDER_PASS* pRenderPass)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateRenderPass(VK_DEVICE device, const VK_RENDER_PASS_CREATE_INFO* pCreateInfo, VK_RENDER_PASS* pRenderPass)
{
- XGL_RESULT result = nextTable.CreateRenderPass(device, pCreateInfo, pRenderPass);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateRenderPass(device, pCreateInfo, pRenderPass);
+ if (VK_SUCCESS == result) {
// Shadow create info and store in map
- XGL_RENDER_PASS_CREATE_INFO* localRPCI = new XGL_RENDER_PASS_CREATE_INFO(*pCreateInfo);
+ VK_RENDER_PASS_CREATE_INFO* localRPCI = new VK_RENDER_PASS_CREATE_INFO(*pCreateInfo);
if (pCreateInfo->pColorLoadOps) {
- localRPCI->pColorLoadOps = new XGL_ATTACHMENT_LOAD_OP[localRPCI->colorAttachmentCount];
- memcpy((void*)localRPCI->pColorLoadOps, pCreateInfo->pColorLoadOps, localRPCI->colorAttachmentCount*sizeof(XGL_ATTACHMENT_LOAD_OP));
+ localRPCI->pColorLoadOps = new VK_ATTACHMENT_LOAD_OP[localRPCI->colorAttachmentCount];
+ memcpy((void*)localRPCI->pColorLoadOps, pCreateInfo->pColorLoadOps, localRPCI->colorAttachmentCount*sizeof(VK_ATTACHMENT_LOAD_OP));
}
if (pCreateInfo->pColorStoreOps) {
- localRPCI->pColorStoreOps = new XGL_ATTACHMENT_STORE_OP[localRPCI->colorAttachmentCount];
- memcpy((void*)localRPCI->pColorStoreOps, pCreateInfo->pColorStoreOps, localRPCI->colorAttachmentCount*sizeof(XGL_ATTACHMENT_STORE_OP));
+ localRPCI->pColorStoreOps = new VK_ATTACHMENT_STORE_OP[localRPCI->colorAttachmentCount];
+ memcpy((void*)localRPCI->pColorStoreOps, pCreateInfo->pColorStoreOps, localRPCI->colorAttachmentCount*sizeof(VK_ATTACHMENT_STORE_OP));
}
if (pCreateInfo->pColorLoadClearValues) {
- localRPCI->pColorLoadClearValues = new XGL_CLEAR_COLOR[localRPCI->colorAttachmentCount];
- memcpy((void*)localRPCI->pColorLoadClearValues, pCreateInfo->pColorLoadClearValues, localRPCI->colorAttachmentCount*sizeof(XGL_CLEAR_COLOR));
+ localRPCI->pColorLoadClearValues = new VK_CLEAR_COLOR[localRPCI->colorAttachmentCount];
+ memcpy((void*)localRPCI->pColorLoadClearValues, pCreateInfo->pColorLoadClearValues, localRPCI->colorAttachmentCount*sizeof(VK_CLEAR_COLOR));
}
renderPassMap[*pRenderPass] = localRPCI;
}
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginRenderPass(XGL_CMD_BUFFER cmdBuffer, const XGL_RENDER_PASS_BEGIN *pRenderPassBegin)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginRenderPass(VK_CMD_BUFFER cmdBuffer, const VK_RENDER_PASS_BEGIN *pRenderPassBegin)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
pCB->activeRenderPass = pRenderPassBegin->renderPass;
pCB->framebuffer = pRenderPassBegin->framebuffer;
if (pCB->lastBoundPipeline) {
- validatePipelineState(pCB, XGL_PIPELINE_BIND_POINT_GRAPHICS, pCB->lastBoundPipeline);
+ validatePipelineState(pCB, VK_PIPELINE_BIND_POINT_GRAPHICS, pCB->lastBoundPipeline);
}
} else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdBeginRenderPass(cmdBuffer, pRenderPassBegin);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndRenderPass(XGL_CMD_BUFFER cmdBuffer, XGL_RENDER_PASS renderPass)
+VK_LAYER_EXPORT void VKAPI vkCmdEndRenderPass(VK_CMD_BUFFER cmdBuffer, VK_RENDER_PASS renderPass)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdEndRenderPass(cmdBuffer, renderPass);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
{
// This layer intercepts callbacks
- XGL_LAYER_DBG_FUNCTION_NODE* pNewDbgFuncNode = (XGL_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(XGL_LAYER_DBG_FUNCTION_NODE));
+ VK_LAYER_DBG_FUNCTION_NODE* pNewDbgFuncNode = (VK_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(VK_LAYER_DBG_FUNCTION_NODE));
#if ALLOC_DEBUG
printf("Alloc34 #%lu pNewDbgFuncNode addr(%p)\n", ++g_alloc_count, (void*)pNewDbgFuncNode);
#endif
if (!pNewDbgFuncNode)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
pNewDbgFuncNode->pfnMsgCallback = pfnMsgCallback;
pNewDbgFuncNode->pUserData = pUserData;
pNewDbgFuncNode->pNext = g_pDbgFunctionHead;
g_pDbgFunctionHead = pNewDbgFuncNode;
// force callbacks if DebugAction hasn't been changed from its initial value
if (g_actionIsDefault) {
- g_debugAction = XGL_DBG_LAYER_ACTION_CALLBACK;
+ g_debugAction = VK_DBG_LAYER_ACTION_CALLBACK;
}
- XGL_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
+ VK_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
- XGL_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
- XGL_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;
+ VK_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
+ VK_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;
while (pTrav) {
if (pTrav->pfnMsgCallback == pfnMsgCallback) {
pPrev->pNext = pTrav->pNext;
if (g_pDbgFunctionHead == NULL)
{
if (g_actionIsDefault)
- g_debugAction = XGL_DBG_LAYER_ACTION_LOG_MSG;
+ g_debugAction = VK_DBG_LAYER_ACTION_LOG_MSG;
else
- g_debugAction = (XGL_LAYER_DBG_ACTION)(g_debugAction & ~((uint32_t)XGL_DBG_LAYER_ACTION_CALLBACK));
+ g_debugAction = (VK_LAYER_DBG_ACTION)(g_debugAction & ~((uint32_t)VK_DBG_LAYER_ACTION_CALLBACK));
}
- XGL_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
+ VK_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDbgMarkerBegin(XGL_CMD_BUFFER cmdBuffer, const char* pMarker)
+VK_LAYER_EXPORT void VKAPI vkCmdDbgMarkerBegin(VK_CMD_BUFFER cmdBuffer, const char* pMarker)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDbgMarkerBegin(cmdBuffer, pMarker);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDbgMarkerEnd(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT void VKAPI vkCmdDbgMarkerEnd(VK_CMD_BUFFER cmdBuffer)
{
GLOBAL_CB_NODE* pCB = getCBNode(cmdBuffer);
if (pCB) {
else {
char str[1024];
sprintf(str, "Attempt to use CmdBuffer %p that doesn't exist!", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, DRAWSTATE_INVALID_CMD_BUFFER, "DS", str);
}
nextTable.CmdDbgMarkerEnd(cmdBuffer);
}
// FIXME: NEED WINDOWS EQUIVALENT
char str[1024];
sprintf(str, "Cannot execute dot program yet on Windows.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_MISSING_DOT_PROGRAM, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_MISSING_DOT_PROGRAM, "DS", str);
#else // WIN32
char dotExe[32] = "/usr/bin/dot";
if( access(dotExe, X_OK) != -1) {
else {
char str[1024];
sprintf(str, "Cannot execute dot program at (%s) to dump requested %s file.", dotExe, outFileName);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_MISSING_DOT_PROGRAM, "DS", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, DRAWSTATE_MISSING_DOT_PROGRAM, "DS", str);
}
#endif // WIN32
}
-XGL_LAYER_EXPORT void* XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* funcName)
+VK_LAYER_EXPORT void* VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* funcName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
if (gpu == NULL)
return NULL;
pCurObj = gpuw;
loader_platform_thread_once(&g_initOnce, initDrawState);
- if (!strcmp(funcName, "xglGetProcAddr"))
- return (void *) xglGetProcAddr;
- if (!strcmp(funcName, "xglCreateDevice"))
- return (void*) xglCreateDevice;
- if (!strcmp(funcName, "xglDestroyDevice"))
- return (void*) xglDestroyDevice;
- if (!strcmp(funcName, "xglGetExtensionSupport"))
- return (void*) xglGetExtensionSupport;
- if (!strcmp(funcName, "xglEnumerateLayers"))
- return (void*) xglEnumerateLayers;
- if (!strcmp(funcName, "xglQueueSubmit"))
- return (void*) xglQueueSubmit;
- if (!strcmp(funcName, "xglDestroyObject"))
- return (void*) xglDestroyObject;
- if (!strcmp(funcName, "xglCreateBufferView"))
- return (void*) xglCreateBufferView;
- if (!strcmp(funcName, "xglCreateImageView"))
- return (void*) xglCreateImageView;
- if (!strcmp(funcName, "xglCreateGraphicsPipeline"))
- return (void*) xglCreateGraphicsPipeline;
- if (!strcmp(funcName, "xglCreateGraphicsPipelineDerivative"))
- return (void*) xglCreateGraphicsPipelineDerivative;
- if (!strcmp(funcName, "xglCreateSampler"))
- return (void*) xglCreateSampler;
- if (!strcmp(funcName, "xglCreateDescriptorSetLayout"))
- return (void*) xglCreateDescriptorSetLayout;
- if (!strcmp(funcName, "xglCreateDescriptorSetLayoutChain"))
- return (void*) xglCreateDescriptorSetLayoutChain;
- if (!strcmp(funcName, "xglBeginDescriptorPoolUpdate"))
- return (void*) xglBeginDescriptorPoolUpdate;
- if (!strcmp(funcName, "xglEndDescriptorPoolUpdate"))
- return (void*) xglEndDescriptorPoolUpdate;
- if (!strcmp(funcName, "xglCreateDescriptorPool"))
- return (void*) xglCreateDescriptorPool;
- if (!strcmp(funcName, "xglResetDescriptorPool"))
- return (void*) xglResetDescriptorPool;
- if (!strcmp(funcName, "xglAllocDescriptorSets"))
- return (void*) xglAllocDescriptorSets;
- if (!strcmp(funcName, "xglClearDescriptorSets"))
- return (void*) xglClearDescriptorSets;
- if (!strcmp(funcName, "xglUpdateDescriptors"))
- return (void*) xglUpdateDescriptors;
- if (!strcmp(funcName, "xglCreateDynamicViewportState"))
- return (void*) xglCreateDynamicViewportState;
- if (!strcmp(funcName, "xglCreateDynamicRasterState"))
- return (void*) xglCreateDynamicRasterState;
- if (!strcmp(funcName, "xglCreateDynamicColorBlendState"))
- return (void*) xglCreateDynamicColorBlendState;
- if (!strcmp(funcName, "xglCreateDynamicDepthStencilState"))
- return (void*) xglCreateDynamicDepthStencilState;
- if (!strcmp(funcName, "xglCreateCommandBuffer"))
- return (void*) xglCreateCommandBuffer;
- if (!strcmp(funcName, "xglBeginCommandBuffer"))
- return (void*) xglBeginCommandBuffer;
- if (!strcmp(funcName, "xglEndCommandBuffer"))
- return (void*) xglEndCommandBuffer;
- if (!strcmp(funcName, "xglResetCommandBuffer"))
- return (void*) xglResetCommandBuffer;
- if (!strcmp(funcName, "xglCmdBindPipeline"))
- return (void*) xglCmdBindPipeline;
- if (!strcmp(funcName, "xglCmdBindDynamicStateObject"))
- return (void*) xglCmdBindDynamicStateObject;
- if (!strcmp(funcName, "xglCmdBindDescriptorSets"))
- return (void*) xglCmdBindDescriptorSets;
- if (!strcmp(funcName, "xglCmdBindVertexBuffer"))
- return (void*) xglCmdBindVertexBuffer;
- if (!strcmp(funcName, "xglCmdBindIndexBuffer"))
- return (void*) xglCmdBindIndexBuffer;
- if (!strcmp(funcName, "xglCmdDraw"))
- return (void*) xglCmdDraw;
- if (!strcmp(funcName, "xglCmdDrawIndexed"))
- return (void*) xglCmdDrawIndexed;
- if (!strcmp(funcName, "xglCmdDrawIndirect"))
- return (void*) xglCmdDrawIndirect;
- if (!strcmp(funcName, "xglCmdDrawIndexedIndirect"))
- return (void*) xglCmdDrawIndexedIndirect;
- if (!strcmp(funcName, "xglCmdDispatch"))
- return (void*) xglCmdDispatch;
- if (!strcmp(funcName, "xglCmdDispatchIndirect"))
- return (void*) xglCmdDispatchIndirect;
- if (!strcmp(funcName, "xglCmdCopyBuffer"))
- return (void*) xglCmdCopyBuffer;
- if (!strcmp(funcName, "xglCmdCopyImage"))
- return (void*) xglCmdCopyImage;
- if (!strcmp(funcName, "xglCmdCopyBufferToImage"))
- return (void*) xglCmdCopyBufferToImage;
- if (!strcmp(funcName, "xglCmdCopyImageToBuffer"))
- return (void*) xglCmdCopyImageToBuffer;
- if (!strcmp(funcName, "xglCmdCloneImageData"))
- return (void*) xglCmdCloneImageData;
- if (!strcmp(funcName, "xglCmdUpdateBuffer"))
- return (void*) xglCmdUpdateBuffer;
- if (!strcmp(funcName, "xglCmdFillBuffer"))
- return (void*) xglCmdFillBuffer;
- if (!strcmp(funcName, "xglCmdClearColorImage"))
- return (void*) xglCmdClearColorImage;
- if (!strcmp(funcName, "xglCmdClearDepthStencil"))
- return (void*) xglCmdClearDepthStencil;
- if (!strcmp(funcName, "xglCmdResolveImage"))
- return (void*) xglCmdResolveImage;
- if (!strcmp(funcName, "xglCmdSetEvent"))
- return (void*) xglCmdSetEvent;
- if (!strcmp(funcName, "xglCmdResetEvent"))
- return (void*) xglCmdResetEvent;
- if (!strcmp(funcName, "xglCmdWaitEvents"))
- return (void*) xglCmdWaitEvents;
- if (!strcmp(funcName, "xglCmdPipelineBarrier"))
- return (void*) xglCmdPipelineBarrier;
- if (!strcmp(funcName, "xglCmdBeginQuery"))
- return (void*) xglCmdBeginQuery;
- if (!strcmp(funcName, "xglCmdEndQuery"))
- return (void*) xglCmdEndQuery;
- if (!strcmp(funcName, "xglCmdResetQueryPool"))
- return (void*) xglCmdResetQueryPool;
- if (!strcmp(funcName, "xglCmdWriteTimestamp"))
- return (void*) xglCmdWriteTimestamp;
- if (!strcmp(funcName, "xglCmdInitAtomicCounters"))
- return (void*) xglCmdInitAtomicCounters;
- if (!strcmp(funcName, "xglCmdLoadAtomicCounters"))
- return (void*) xglCmdLoadAtomicCounters;
- if (!strcmp(funcName, "xglCmdSaveAtomicCounters"))
- return (void*) xglCmdSaveAtomicCounters;
- if (!strcmp(funcName, "xglCreateFramebuffer"))
- return (void*) xglCreateFramebuffer;
- if (!strcmp(funcName, "xglCreateRenderPass"))
- return (void*) xglCreateRenderPass;
- if (!strcmp(funcName, "xglCmdBeginRenderPass"))
- return (void*) xglCmdBeginRenderPass;
- if (!strcmp(funcName, "xglCmdEndRenderPass"))
- return (void*) xglCmdEndRenderPass;
- if (!strcmp(funcName, "xglDbgRegisterMsgCallback"))
- return (void*) xglDbgRegisterMsgCallback;
- if (!strcmp(funcName, "xglDbgUnregisterMsgCallback"))
- return (void*) xglDbgUnregisterMsgCallback;
- if (!strcmp(funcName, "xglCmdDbgMarkerBegin"))
- return (void*) xglCmdDbgMarkerBegin;
- if (!strcmp(funcName, "xglCmdDbgMarkerEnd"))
- return (void*) xglCmdDbgMarkerEnd;
+ if (!strcmp(funcName, "vkGetProcAddr"))
+ return (void *) vkGetProcAddr;
+ if (!strcmp(funcName, "vkCreateDevice"))
+ return (void*) vkCreateDevice;
+ if (!strcmp(funcName, "vkDestroyDevice"))
+ return (void*) vkDestroyDevice;
+ if (!strcmp(funcName, "vkGetExtensionSupport"))
+ return (void*) vkGetExtensionSupport;
+ if (!strcmp(funcName, "vkEnumerateLayers"))
+ return (void*) vkEnumerateLayers;
+ if (!strcmp(funcName, "vkQueueSubmit"))
+ return (void*) vkQueueSubmit;
+ if (!strcmp(funcName, "vkDestroyObject"))
+ return (void*) vkDestroyObject;
+ if (!strcmp(funcName, "vkCreateBufferView"))
+ return (void*) vkCreateBufferView;
+ if (!strcmp(funcName, "vkCreateImageView"))
+ return (void*) vkCreateImageView;
+ if (!strcmp(funcName, "vkCreateGraphicsPipeline"))
+ return (void*) vkCreateGraphicsPipeline;
+ if (!strcmp(funcName, "vkCreateGraphicsPipelineDerivative"))
+ return (void*) vkCreateGraphicsPipelineDerivative;
+ if (!strcmp(funcName, "vkCreateSampler"))
+ return (void*) vkCreateSampler;
+ if (!strcmp(funcName, "vkCreateDescriptorSetLayout"))
+ return (void*) vkCreateDescriptorSetLayout;
+ if (!strcmp(funcName, "vkCreateDescriptorSetLayoutChain"))
+ return (void*) vkCreateDescriptorSetLayoutChain;
+ if (!strcmp(funcName, "vkBeginDescriptorPoolUpdate"))
+ return (void*) vkBeginDescriptorPoolUpdate;
+ if (!strcmp(funcName, "vkEndDescriptorPoolUpdate"))
+ return (void*) vkEndDescriptorPoolUpdate;
+ if (!strcmp(funcName, "vkCreateDescriptorPool"))
+ return (void*) vkCreateDescriptorPool;
+ if (!strcmp(funcName, "vkResetDescriptorPool"))
+ return (void*) vkResetDescriptorPool;
+ if (!strcmp(funcName, "vkAllocDescriptorSets"))
+ return (void*) vkAllocDescriptorSets;
+ if (!strcmp(funcName, "vkClearDescriptorSets"))
+ return (void*) vkClearDescriptorSets;
+ if (!strcmp(funcName, "vkUpdateDescriptors"))
+ return (void*) vkUpdateDescriptors;
+ if (!strcmp(funcName, "vkCreateDynamicViewportState"))
+ return (void*) vkCreateDynamicViewportState;
+ if (!strcmp(funcName, "vkCreateDynamicRasterState"))
+ return (void*) vkCreateDynamicRasterState;
+ if (!strcmp(funcName, "vkCreateDynamicColorBlendState"))
+ return (void*) vkCreateDynamicColorBlendState;
+ if (!strcmp(funcName, "vkCreateDynamicDepthStencilState"))
+ return (void*) vkCreateDynamicDepthStencilState;
+ if (!strcmp(funcName, "vkCreateCommandBuffer"))
+ return (void*) vkCreateCommandBuffer;
+ if (!strcmp(funcName, "vkBeginCommandBuffer"))
+ return (void*) vkBeginCommandBuffer;
+ if (!strcmp(funcName, "vkEndCommandBuffer"))
+ return (void*) vkEndCommandBuffer;
+ if (!strcmp(funcName, "vkResetCommandBuffer"))
+ return (void*) vkResetCommandBuffer;
+ if (!strcmp(funcName, "vkCmdBindPipeline"))
+ return (void*) vkCmdBindPipeline;
+ if (!strcmp(funcName, "vkCmdBindDynamicStateObject"))
+ return (void*) vkCmdBindDynamicStateObject;
+ if (!strcmp(funcName, "vkCmdBindDescriptorSets"))
+ return (void*) vkCmdBindDescriptorSets;
+ if (!strcmp(funcName, "vkCmdBindVertexBuffer"))
+ return (void*) vkCmdBindVertexBuffer;
+ if (!strcmp(funcName, "vkCmdBindIndexBuffer"))
+ return (void*) vkCmdBindIndexBuffer;
+ if (!strcmp(funcName, "vkCmdDraw"))
+ return (void*) vkCmdDraw;
+ if (!strcmp(funcName, "vkCmdDrawIndexed"))
+ return (void*) vkCmdDrawIndexed;
+ if (!strcmp(funcName, "vkCmdDrawIndirect"))
+ return (void*) vkCmdDrawIndirect;
+ if (!strcmp(funcName, "vkCmdDrawIndexedIndirect"))
+ return (void*) vkCmdDrawIndexedIndirect;
+ if (!strcmp(funcName, "vkCmdDispatch"))
+ return (void*) vkCmdDispatch;
+ if (!strcmp(funcName, "vkCmdDispatchIndirect"))
+ return (void*) vkCmdDispatchIndirect;
+ if (!strcmp(funcName, "vkCmdCopyBuffer"))
+ return (void*) vkCmdCopyBuffer;
+ if (!strcmp(funcName, "vkCmdCopyImage"))
+ return (void*) vkCmdCopyImage;
+ if (!strcmp(funcName, "vkCmdCopyBufferToImage"))
+ return (void*) vkCmdCopyBufferToImage;
+ if (!strcmp(funcName, "vkCmdCopyImageToBuffer"))
+ return (void*) vkCmdCopyImageToBuffer;
+ if (!strcmp(funcName, "vkCmdCloneImageData"))
+ return (void*) vkCmdCloneImageData;
+ if (!strcmp(funcName, "vkCmdUpdateBuffer"))
+ return (void*) vkCmdUpdateBuffer;
+ if (!strcmp(funcName, "vkCmdFillBuffer"))
+ return (void*) vkCmdFillBuffer;
+ if (!strcmp(funcName, "vkCmdClearColorImage"))
+ return (void*) vkCmdClearColorImage;
+ if (!strcmp(funcName, "vkCmdClearDepthStencil"))
+ return (void*) vkCmdClearDepthStencil;
+ if (!strcmp(funcName, "vkCmdResolveImage"))
+ return (void*) vkCmdResolveImage;
+ if (!strcmp(funcName, "vkCmdSetEvent"))
+ return (void*) vkCmdSetEvent;
+ if (!strcmp(funcName, "vkCmdResetEvent"))
+ return (void*) vkCmdResetEvent;
+ if (!strcmp(funcName, "vkCmdWaitEvents"))
+ return (void*) vkCmdWaitEvents;
+ if (!strcmp(funcName, "vkCmdPipelineBarrier"))
+ return (void*) vkCmdPipelineBarrier;
+ if (!strcmp(funcName, "vkCmdBeginQuery"))
+ return (void*) vkCmdBeginQuery;
+ if (!strcmp(funcName, "vkCmdEndQuery"))
+ return (void*) vkCmdEndQuery;
+ if (!strcmp(funcName, "vkCmdResetQueryPool"))
+ return (void*) vkCmdResetQueryPool;
+ if (!strcmp(funcName, "vkCmdWriteTimestamp"))
+ return (void*) vkCmdWriteTimestamp;
+ if (!strcmp(funcName, "vkCmdInitAtomicCounters"))
+ return (void*) vkCmdInitAtomicCounters;
+ if (!strcmp(funcName, "vkCmdLoadAtomicCounters"))
+ return (void*) vkCmdLoadAtomicCounters;
+ if (!strcmp(funcName, "vkCmdSaveAtomicCounters"))
+ return (void*) vkCmdSaveAtomicCounters;
+ if (!strcmp(funcName, "vkCreateFramebuffer"))
+ return (void*) vkCreateFramebuffer;
+ if (!strcmp(funcName, "vkCreateRenderPass"))
+ return (void*) vkCreateRenderPass;
+ if (!strcmp(funcName, "vkCmdBeginRenderPass"))
+ return (void*) vkCmdBeginRenderPass;
+ if (!strcmp(funcName, "vkCmdEndRenderPass"))
+ return (void*) vkCmdEndRenderPass;
+ if (!strcmp(funcName, "vkDbgRegisterMsgCallback"))
+ return (void*) vkDbgRegisterMsgCallback;
+ if (!strcmp(funcName, "vkDbgUnregisterMsgCallback"))
+ return (void*) vkDbgUnregisterMsgCallback;
+ if (!strcmp(funcName, "vkCmdDbgMarkerBegin"))
+ return (void*) vkCmdDbgMarkerBegin;
+ if (!strcmp(funcName, "vkCmdDbgMarkerEnd"))
+ return (void*) vkCmdDbgMarkerEnd;
if (!strcmp("drawStateDumpDotFile", funcName))
return (void*) drawStateDumpDotFile;
if (!strcmp("drawStateDumpCommandBufferDotFile", funcName))
else {
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU)gpuw->nextObject, funcName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU)gpuw->nextObject, funcName);
}
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
-#include "xglLayer.h"
+#include "vkLayer.h"
#include <vector>
using namespace std;
DRAWSTATE_UPDATE_WITHOUT_BEGIN, // Attempt to update descriptors w/o calling BeginDescriptorPoolUpdate
DRAWSTATE_INVALID_PIPELINE, // Invalid Pipeline referenced
DRAWSTATE_INVALID_CMD_BUFFER, // Invalid CmdBuffer referenced
- DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, // binding in xglCmdBindVertexData() too large for PSO's pVertexBindingDescriptions array
+ DRAWSTATE_VTX_INDEX_OUT_OF_BOUNDS, // binding in vkCmdBindVertexData() too large for PSO's pVertexBindingDescriptions array
DRAWSTATE_INVALID_DYNAMIC_STATE_OBJECT, // Invalid dyn state object
DRAWSTATE_MISSING_DOT_PROGRAM, // No "dot" program in order to generate png image
- DRAWSTATE_BINDING_DS_NO_END_UPDATE, // DS bound to CmdBuffer w/o call to xglEndDescriptorSetUpdate())
+ DRAWSTATE_BINDING_DS_NO_END_UPDATE, // DS bound to CmdBuffer w/o call to vkEndDescriptorSetUpdate()
DRAWSTATE_NO_DS_POOL, // No DS Pool is available
DRAWSTATE_OUT_OF_MEMORY, // malloc failed
DRAWSTATE_DESCRIPTOR_TYPE_MISMATCH, // Type in layout vs. update are not the same
typedef struct _SHADER_DS_MAPPING {
uint32_t slotCount;
- XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pShaderMappingSlot;
+ VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pShaderMappingSlot;
} SHADER_DS_MAPPING;
typedef struct _GENERIC_HEADER {
- XGL_STRUCTURE_TYPE sType;
+ VK_STRUCTURE_TYPE sType;
const void* pNext;
} GENERIC_HEADER;
typedef struct _PIPELINE_NODE {
- XGL_PIPELINE pipeline;
-
- XGL_GRAPHICS_PIPELINE_CREATE_INFO graphicsPipelineCI;
- XGL_PIPELINE_VERTEX_INPUT_CREATE_INFO vertexInputCI;
- XGL_PIPELINE_IA_STATE_CREATE_INFO iaStateCI;
- XGL_PIPELINE_TESS_STATE_CREATE_INFO tessStateCI;
- XGL_PIPELINE_VP_STATE_CREATE_INFO vpStateCI;
- XGL_PIPELINE_RS_STATE_CREATE_INFO rsStateCI;
- XGL_PIPELINE_MS_STATE_CREATE_INFO msStateCI;
- XGL_PIPELINE_CB_STATE_CREATE_INFO cbStateCI;
- XGL_PIPELINE_DS_STATE_CREATE_INFO dsStateCI;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO vsCI;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO tcsCI;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO tesCI;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO gsCI;
- XGL_PIPELINE_SHADER_STAGE_CREATE_INFO fsCI;
- // Compute shader is include in XGL_COMPUTE_PIPELINE_CREATE_INFO
- XGL_COMPUTE_PIPELINE_CREATE_INFO computePipelineCI;
-
- XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateTree; // Ptr to shadow of data in create tree
+ VK_PIPELINE pipeline;
+
+ VK_GRAPHICS_PIPELINE_CREATE_INFO graphicsPipelineCI;
+ VK_PIPELINE_VERTEX_INPUT_CREATE_INFO vertexInputCI;
+ VK_PIPELINE_IA_STATE_CREATE_INFO iaStateCI;
+ VK_PIPELINE_TESS_STATE_CREATE_INFO tessStateCI;
+ VK_PIPELINE_VP_STATE_CREATE_INFO vpStateCI;
+ VK_PIPELINE_RS_STATE_CREATE_INFO rsStateCI;
+ VK_PIPELINE_MS_STATE_CREATE_INFO msStateCI;
+ VK_PIPELINE_CB_STATE_CREATE_INFO cbStateCI;
+ VK_PIPELINE_DS_STATE_CREATE_INFO dsStateCI;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO vsCI;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO tcsCI;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO tesCI;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO gsCI;
+ VK_PIPELINE_SHADER_STAGE_CREATE_INFO fsCI;
+ // Compute shader is included in VK_COMPUTE_PIPELINE_CREATE_INFO
+ VK_COMPUTE_PIPELINE_CREATE_INFO computePipelineCI;
+
+ VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateTree; // Ptr to shadow of data in create tree
// Vtx input info (if any)
uint32_t vtxBindingCount; // number of bindings
- XGL_VERTEX_INPUT_BINDING_DESCRIPTION* pVertexBindingDescriptions;
+ VK_VERTEX_INPUT_BINDING_DESCRIPTION* pVertexBindingDescriptions;
uint32_t vtxAttributeCount; // number of attributes
- XGL_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION* pVertexAttributeDescriptions;
+ VK_VERTEX_INPUT_ATTRIBUTE_DESCRIPTION* pVertexAttributeDescriptions;
uint32_t attachmentCount; // number of CB attachments
- XGL_PIPELINE_CB_ATTACHMENT_STATE* pAttachments;
+ VK_PIPELINE_CB_ATTACHMENT_STATE* pAttachments;
} PIPELINE_NODE;
typedef struct _SAMPLER_NODE {
- XGL_SAMPLER sampler;
- XGL_SAMPLER_CREATE_INFO createInfo;
+ VK_SAMPLER sampler;
+ VK_SAMPLER_CREATE_INFO createInfo;
} SAMPLER_NODE;
typedef struct _IMAGE_NODE {
- XGL_IMAGE_VIEW image;
- XGL_IMAGE_VIEW_CREATE_INFO createInfo;
- XGL_IMAGE_VIEW_ATTACH_INFO attachInfo;
+ VK_IMAGE_VIEW image;
+ VK_IMAGE_VIEW_CREATE_INFO createInfo;
+ VK_IMAGE_VIEW_ATTACH_INFO attachInfo;
} IMAGE_NODE;
typedef struct _BUFFER_NODE {
- XGL_BUFFER_VIEW buffer;
- XGL_BUFFER_VIEW_CREATE_INFO createInfo;
- XGL_BUFFER_VIEW_ATTACH_INFO attachInfo;
+ VK_BUFFER_VIEW buffer;
+ VK_BUFFER_VIEW_CREATE_INFO createInfo;
+ VK_BUFFER_VIEW_ATTACH_INFO attachInfo;
} BUFFER_NODE;
typedef struct _DYNAMIC_STATE_NODE {
- XGL_DYNAMIC_STATE_OBJECT stateObj;
+ VK_DYNAMIC_STATE_OBJECT stateObj;
GENERIC_HEADER* pCreateInfo;
union {
- XGL_DYNAMIC_VP_STATE_CREATE_INFO vpci;
- XGL_DYNAMIC_RS_STATE_CREATE_INFO rsci;
- XGL_DYNAMIC_CB_STATE_CREATE_INFO cbci;
- XGL_DYNAMIC_DS_STATE_CREATE_INFO dsci;
+ VK_DYNAMIC_VP_STATE_CREATE_INFO vpci;
+ VK_DYNAMIC_RS_STATE_CREATE_INFO rsci;
+ VK_DYNAMIC_CB_STATE_CREATE_INFO cbci;
+ VK_DYNAMIC_DS_STATE_CREATE_INFO dsci;
} create_info;
} DYNAMIC_STATE_NODE;
// Descriptor Data structures
// Layout Node has the core layout data
typedef struct _LAYOUT_NODE {
- XGL_DESCRIPTOR_SET_LAYOUT layout;
- XGL_DESCRIPTOR_TYPE* pTypes; // Dynamic array that will be created to verify descriptor types
- XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO createInfo;
+ VK_DESCRIPTOR_SET_LAYOUT layout;
+ VK_DESCRIPTOR_TYPE* pTypes; // Dynamic array that will be created to verify descriptor types
+ VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO createInfo;
uint32_t startIndex; // 1st index of this layout
uint32_t endIndex; // last index of this layout
} LAYOUT_NODE;
typedef struct _SET_NODE {
- XGL_DESCRIPTOR_SET set;
- XGL_DESCRIPTOR_POOL pool;
- XGL_DESCRIPTOR_SET_USAGE setUsage;
+ VK_DESCRIPTOR_SET set;
+ VK_DESCRIPTOR_POOL pool;
+ VK_DESCRIPTOR_SET_USAGE setUsage;
// Head of LL of all Update structs for this set
GENERIC_HEADER* pUpdateStructs;
// Total num of descriptors in this set (count of its layout plus all prior layouts)
} SET_NODE;
typedef struct _POOL_NODE {
- XGL_DESCRIPTOR_POOL pool;
- XGL_DESCRIPTOR_POOL_USAGE poolUsage;
+ VK_DESCRIPTOR_POOL pool;
+ VK_DESCRIPTOR_POOL_USAGE poolUsage;
uint32_t maxSets;
- XGL_DESCRIPTOR_POOL_CREATE_INFO createInfo;
+ VK_DESCRIPTOR_POOL_CREATE_INFO createInfo;
bool32_t updateActive; // Track if Pool is in an update block
SET_NODE* pSets; // Head of LL of sets for this Pool
} POOL_NODE;
} CB_STATE;
// Cmd Buffer Wrapper Struct
typedef struct _GLOBAL_CB_NODE {
- XGL_CMD_BUFFER cmdBuffer;
+ VK_CMD_BUFFER cmdBuffer;
uint32_t queueNodeIndex;
- XGL_FLAGS flags;
- XGL_FENCE fence; // fence tracking this cmd buffer
+ VK_FLAGS flags;
+ VK_FENCE fence; // fence tracking this cmd buffer
uint64_t numCmds; // number of cmds in this CB
uint64_t drawCount[NUM_DRAW_TYPES]; // Count of each type of draw in this CB
CB_STATE state; // Track cmd buffer update status
// Currently storing "lastBound" objects on per-CB basis
// long-term may want to create caches of "lastBound" states and could have
// each individual CMD_NODE referencing its own "lastBound" state
- XGL_PIPELINE lastBoundPipeline;
+ VK_PIPELINE lastBoundPipeline;
uint32_t lastVtxBinding;
- DYNAMIC_STATE_NODE* lastBoundDynamicState[XGL_NUM_STATE_BIND_POINT];
- XGL_DESCRIPTOR_SET lastBoundDescriptorSet;
- XGL_RENDER_PASS activeRenderPass;
- XGL_FRAMEBUFFER framebuffer;
+ DYNAMIC_STATE_NODE* lastBoundDynamicState[VK_NUM_STATE_BIND_POINT];
+ VK_DESCRIPTOR_SET lastBoundDescriptorSet;
+ VK_RENDER_PASS activeRenderPass;
+ VK_FRAMEBUFFER framebuffer;
} GLOBAL_CB_NODE;
//prototypes for extension functions
#include <string.h>
#include "loader_platform.h"
#include "glave_snapshot.h"
-#include "xgl_struct_string_helper.h"
+#include "vk_struct_string_helper.h"
#define LAYER_NAME_STR "GlaveSnapshot"
#define LAYER_ABBREV_STR "GLVSnap"
-static XGL_LAYER_DISPATCH_TABLE nextTable;
-static XGL_BASE_LAYER_OBJECT *pCurObj;
+static VK_LAYER_DISPATCH_TABLE nextTable;
+static VK_BASE_LAYER_OBJECT *pCurObj;
// The following is #included again to catch certain OS-specific functions being used:
#include "loader_platform.h"
memcpy(*ppDest, pSrc, size);
}
-XGL_DEVICE_CREATE_INFO* glv_deepcopy_xgl_device_create_info(const XGL_DEVICE_CREATE_INFO* pSrcCreateInfo)
+VK_DEVICE_CREATE_INFO* glv_deepcopy_VK_DEVICE_CREATE_INFO(const VK_DEVICE_CREATE_INFO* pSrcCreateInfo)
{
- XGL_DEVICE_CREATE_INFO* pDestCreateInfo;
+ VK_DEVICE_CREATE_INFO* pDestCreateInfo;
- // NOTE: partially duplicated code from add_XGL_DEVICE_CREATE_INFO_to_packet(...)
+ // NOTE: partially duplicated code from add_VK_DEVICE_CREATE_INFO_to_packet(...)
{
uint32_t i;
- glv_vk_malloc_and_copy((void**)&pDestCreateInfo, sizeof(XGL_DEVICE_CREATE_INFO), pSrcCreateInfo);
- glv_vk_malloc_and_copy((void**)&pDestCreateInfo->pRequestedQueues, pSrcCreateInfo->queueRecordCount*sizeof(XGL_DEVICE_QUEUE_CREATE_INFO), pSrcCreateInfo->pRequestedQueues);
+ glv_vk_malloc_and_copy((void**)&pDestCreateInfo, sizeof(VK_DEVICE_CREATE_INFO), pSrcCreateInfo);
+ glv_vk_malloc_and_copy((void**)&pDestCreateInfo->pRequestedQueues, pSrcCreateInfo->queueRecordCount*sizeof(VK_DEVICE_QUEUE_CREATE_INFO), pSrcCreateInfo->pRequestedQueues);
if (pSrcCreateInfo->extensionCount > 0)
{
glv_vk_malloc_and_copy((void**)&pDestCreateInfo->ppEnabledExtensionNames[i], strlen(pSrcCreateInfo->ppEnabledExtensionNames[i]) + 1, pSrcCreateInfo->ppEnabledExtensionNames[i]);
}
}
- XGL_LAYER_CREATE_INFO *pSrcNext = ( XGL_LAYER_CREATE_INFO *) pSrcCreateInfo->pNext;
- XGL_LAYER_CREATE_INFO **ppDstNext = ( XGL_LAYER_CREATE_INFO **) &pDestCreateInfo->pNext;
+ VK_LAYER_CREATE_INFO *pSrcNext = ( VK_LAYER_CREATE_INFO *) pSrcCreateInfo->pNext;
+ VK_LAYER_CREATE_INFO **ppDstNext = ( VK_LAYER_CREATE_INFO **) &pDestCreateInfo->pNext;
while (pSrcNext != NULL)
{
- if ((pSrcNext->sType == XGL_STRUCTURE_TYPE_LAYER_CREATE_INFO) && pSrcNext->layerCount > 0)
+ if ((pSrcNext->sType == VK_STRUCTURE_TYPE_LAYER_CREATE_INFO) && pSrcNext->layerCount > 0)
{
- glv_vk_malloc_and_copy((void**)ppDstNext, sizeof(XGL_LAYER_CREATE_INFO), pSrcNext);
+ glv_vk_malloc_and_copy((void**)ppDstNext, sizeof(VK_LAYER_CREATE_INFO), pSrcNext);
glv_vk_malloc_and_copy((void**)&(*ppDstNext)->ppActiveLayerNames, pSrcNext->layerCount * sizeof(char*), pSrcNext->ppActiveLayerNames);
for (i = 0; i < pSrcNext->layerCount; i++)
{
glv_vk_malloc_and_copy((void**)&(*ppDstNext)->ppActiveLayerNames[i], strlen(pSrcNext->ppActiveLayerNames[i]) + 1, pSrcNext->ppActiveLayerNames[i]);
}
- ppDstNext = (XGL_LAYER_CREATE_INFO**) &(*ppDstNext)->pNext;
+ ppDstNext = (VK_LAYER_CREATE_INFO**) &(*ppDstNext)->pNext;
}
- pSrcNext = (XGL_LAYER_CREATE_INFO*) pSrcNext->pNext;
+ pSrcNext = (VK_LAYER_CREATE_INFO*) pSrcNext->pNext;
}
}
return pDestCreateInfo;
}
-void glv_deepfree_xgl_device_create_info(XGL_DEVICE_CREATE_INFO* pCreateInfo)
+void glv_deepfree_VK_DEVICE_CREATE_INFO(VK_DEVICE_CREATE_INFO* pCreateInfo)
{
uint32_t i;
if (pCreateInfo->pRequestedQueues != NULL)
free((void*)pCreateInfo->ppEnabledExtensionNames);
}
- XGL_LAYER_CREATE_INFO *pSrcNext = (XGL_LAYER_CREATE_INFO*)pCreateInfo->pNext;
+ VK_LAYER_CREATE_INFO *pSrcNext = (VK_LAYER_CREATE_INFO*)pCreateInfo->pNext;
while (pSrcNext != NULL)
{
- XGL_LAYER_CREATE_INFO* pTmp = (XGL_LAYER_CREATE_INFO*)pSrcNext->pNext;
- if ((pSrcNext->sType == XGL_STRUCTURE_TYPE_LAYER_CREATE_INFO) && pSrcNext->layerCount > 0)
+ VK_LAYER_CREATE_INFO* pTmp = (VK_LAYER_CREATE_INFO*)pSrcNext->pNext;
+ if ((pSrcNext->sType == VK_STRUCTURE_TYPE_LAYER_CREATE_INFO) && pSrcNext->layerCount > 0)
{
for (i = 0; i < pSrcNext->layerCount; i++)
{
free(pCreateInfo);
}
-void glv_vk_snapshot_copy_createdevice_params(GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS* pDest, XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+void glv_vk_snapshot_copy_createdevice_params(GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS* pDest, VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
pDest->gpu = gpu;
- pDest->pCreateInfo = glv_deepcopy_xgl_device_create_info(pCreateInfo);
+ pDest->pCreateInfo = glv_deepcopy_VK_DEVICE_CREATE_INFO(pCreateInfo);
- pDest->pDevice = (XGL_DEVICE*)malloc(sizeof(XGL_DEVICE));
+ pDest->pDevice = (VK_DEVICE*)malloc(sizeof(VK_DEVICE));
*pDest->pDevice = *pDevice;
}
void glv_vk_snapshot_destroy_createdevice_params(GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS* pSrc)
{
- memset(&pSrc->gpu, 0, sizeof(XGL_PHYSICAL_GPU));
+ memset(&pSrc->gpu, 0, sizeof(VK_PHYSICAL_GPU));
- glv_deepfree_xgl_device_create_info(pSrc->pCreateInfo);
+ glv_deepfree_VK_DEVICE_CREATE_INFO(pSrc->pCreateInfo);
pSrc->pCreateInfo = NULL;
free(pSrc->pDevice);
// Add a new node to the global and object lists, then return it so the caller can populate the object information.
-static GLV_VK_SNAPSHOT_LL_NODE* snapshot_insert_object(GLV_VK_SNAPSHOT* pSnapshot, void* pObject, XGL_OBJECT_TYPE type)
+static GLV_VK_SNAPSHOT_LL_NODE* snapshot_insert_object(GLV_VK_SNAPSHOT* pSnapshot, void* pObject, VK_OBJECT_TYPE type)
{
// Create a new node
GLV_VK_SNAPSHOT_LL_NODE* pNewObjNode = (GLV_VK_SNAPSHOT_LL_NODE*)malloc(sizeof(GLV_VK_SNAPSHOT_LL_NODE));
}
// This is just a helper function to snapshot_remove_object(..). It is not intended for this to be called directly.
-static void snapshot_remove_obj_type(GLV_VK_SNAPSHOT* pSnapshot, void* pObj, XGL_OBJECT_TYPE objType) {
+static void snapshot_remove_obj_type(GLV_VK_SNAPSHOT* pSnapshot, void* pObj, VK_OBJECT_TYPE objType) {
GLV_VK_SNAPSHOT_LL_NODE *pTrav = pSnapshot->pObjectHead[objType];
GLV_VK_SNAPSHOT_LL_NODE *pPrev = pSnapshot->pObjectHead[objType];
while (pTrav) {
pTrav = pTrav->pNextObj;
}
char str[1024];
- sprintf(str, "OBJ INTERNAL ERROR : Obj %p was in global list but not in %s list", pObj, string_XGL_OBJECT_TYPE(objType));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_INTERNAL_ERROR, LAYER_ABBREV_STR, str);
+ sprintf(str, "OBJ INTERNAL ERROR : Obj %p was in global list but not in %s list", pObj, string_VK_OBJECT_TYPE(objType));
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_INTERNAL_ERROR, LAYER_ABBREV_STR, str);
}
// Search global list to find object,
// Object not found.
char str[1024];
sprintf(str, "Object %p was not found in the created object list. It should be added as a deleted object.", pObject);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pObject, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pObject, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
return NULL;
}
// Add a new deleted object node to the list
-static void snapshot_insert_deleted_object(GLV_VK_SNAPSHOT* pSnapshot, void* pObject, XGL_OBJECT_TYPE type)
+static void snapshot_insert_deleted_object(GLV_VK_SNAPSHOT* pSnapshot, void* pObject, VK_OBJECT_TYPE type)
{
// Create a new node
GLV_VK_SNAPSHOT_DELETED_OBJ_NODE* pNewObjNode = (GLV_VK_SNAPSHOT_DELETED_OBJ_NODE*)malloc(sizeof(GLV_VK_SNAPSHOT_DELETED_OBJ_NODE));
}
// Note: the parameters after pSnapshot match the order of vkCreateDevice(..)
-static void snapshot_insert_device(GLV_VK_SNAPSHOT* pSnapshot, XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+static void snapshot_insert_device(GLV_VK_SNAPSHOT* pSnapshot, VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(pSnapshot, *pDevice, XGL_OBJECT_TYPE_DEVICE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(pSnapshot, *pDevice, VK_OBJECT_TYPE_DEVICE);
pNode->obj.pStruct = malloc(sizeof(GLV_VK_SNAPSHOT_DEVICE_NODE));
GLV_VK_SNAPSHOT_DEVICE_NODE* pDevNode = (GLV_VK_SNAPSHOT_DEVICE_NODE*)pNode->obj.pStruct;
pSnapshot->deviceCount++;
}
-static void snapshot_remove_device(GLV_VK_SNAPSHOT* pSnapshot, XGL_DEVICE device)
+static void snapshot_remove_device(GLV_VK_SNAPSHOT* pSnapshot, VK_DEVICE device)
{
GLV_VK_SNAPSHOT_LL_NODE* pFoundObject = snapshot_remove_object(pSnapshot, device);
// If the code got here, then the device wasn't in the devices list.
// That means we should add this device to the deleted items list.
- snapshot_insert_deleted_object(&s_delta, device, XGL_OBJECT_TYPE_DEVICE);
+ snapshot_insert_deleted_object(&s_delta, device, VK_OBJECT_TYPE_DEVICE);
}
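The create/destroy bookkeeping above (snapshot_insert_object paired with snapshot_remove_object, plus a deleted-objects list when removal misses) boils down to a singly linked list keyed by the object handle. A minimal standalone sketch of that pattern, with hypothetical names (the real layer also maintains per-VK_OBJECT_TYPE lists and a global list in parallel):

```c
/* Minimal sketch of the snapshot layer's object-tracking pattern.
 * Names here are illustrative, not the layer's real API. */
#include <stdlib.h>

typedef struct node {
    void*        obj;   /* the tracked handle (stand-in for pVkObject) */
    int          type;  /* stand-in for VK_OBJECT_TYPE                 */
    struct node* next;
} node_t;

typedef struct {
    node_t* head;
    int     count;
} tracker_t;

/* Insert at the head of the list, as snapshot_insert_object() does. */
static node_t* tracker_insert(tracker_t* t, void* obj, int type)
{
    node_t* n = (node_t*)malloc(sizeof(node_t));
    if (n == NULL)
        return NULL;
    n->obj  = obj;
    n->type = type;
    n->next = t->head;
    t->head = n;
    t->count++;
    return n;
}

/* Unlink and free the matching node; returns 1 on success, 0 if the
 * object was never tracked (the layer would then log it and record it
 * on the deleted-objects list). */
static int tracker_remove(tracker_t* t, void* obj)
{
    node_t** pp = &t->head;
    while (*pp != NULL) {
        if ((*pp)->obj == obj) {
            node_t* dead = *pp;
            *pp = dead->next;
            free(dead);
            t->count--;
            return 1;
        }
        pp = &(*pp)->next;
    }
    return 0;
}
```

The pointer-to-pointer walk avoids the separate pTrav/pPrev pair used in snapshot_remove_obj_type(), but the effect on the list is the same.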
// Traverse global list and return type for given object
-static XGL_OBJECT_TYPE ll_get_obj_type(XGL_OBJECT object) {
+static VK_OBJECT_TYPE ll_get_obj_type(VK_OBJECT object) {
GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pGlobalObjs;
while (pTrav) {
if (pTrav->obj.pVkObject == object)
}
char str[1024];
sprintf(str, "Attempting look-up on obj %p but it is NOT in the global list!", (void*)object);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, GLVSNAPSHOT_MISSING_OBJECT, LAYER_ABBREV_STR, str);
- return XGL_OBJECT_TYPE_UNKNOWN;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, GLVSNAPSHOT_MISSING_OBJECT, LAYER_ABBREV_STR, str);
+ return VK_OBJECT_TYPE_UNKNOWN;
}
-static void ll_increment_use_count(void* pObj, XGL_OBJECT_TYPE objType) {
+static void ll_increment_use_count(void* pObj, VK_OBJECT_TYPE objType) {
GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pObjectHead[objType];
while (pTrav) {
if (pTrav->obj.pVkObject == pObj) {
// Instead, we need to make a list of referenced objects. When the delta is merged with a snapshot, we'll need
// to confirm that the referenced objects actually exist in the snapshot; otherwise I guess the merge should fail.
char str[1024];
- sprintf(str, "Unable to increment count for obj %p, will add to list as %s type and increment count", pObj, string_XGL_OBJECT_TYPE(objType));
- layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
+ sprintf(str, "Unable to increment count for obj %p, will add to list as %s type and increment count", pObj, string_VK_OBJECT_TYPE(objType));
+ layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
// ll_insert_obj(pObj, objType);
// ll_increment_use_count(pObj, objType);
}
// Set selected flag state for an object node
-static void set_status(void* pObj, XGL_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {
+static void set_status(void* pObj, VK_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {
if (pObj != NULL) {
GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pObjectHead[objType];
while (pTrav) {
// If we do not find it, print an error
char str[1024];
- sprintf(str, "Unable to set status for non-existent object %p of %s type", pObj, string_XGL_OBJECT_TYPE(objType));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
+ sprintf(str, "Unable to set status for non-existent object %p of %s type", pObj, string_VK_OBJECT_TYPE(objType));
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
}
}
// Track selected state for an object node
-static void track_object_status(void* pObj, XGL_STATE_BIND_POINT stateBindPoint) {
- GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pObjectHead[XGL_OBJECT_TYPE_CMD_BUFFER];
+static void track_object_status(void* pObj, VK_STATE_BIND_POINT stateBindPoint) {
+ GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pObjectHead[VK_OBJECT_TYPE_CMD_BUFFER];
while (pTrav) {
if (pTrav->obj.pVkObject == pObj) {
- if (stateBindPoint == XGL_STATE_BIND_VIEWPORT) {
+ if (stateBindPoint == VK_STATE_BIND_VIEWPORT) {
pTrav->obj.status |= OBJSTATUS_VIEWPORT_BOUND;
- } else if (stateBindPoint == XGL_STATE_BIND_RASTER) {
+ } else if (stateBindPoint == VK_STATE_BIND_RASTER) {
pTrav->obj.status |= OBJSTATUS_RASTER_BOUND;
- } else if (stateBindPoint == XGL_STATE_BIND_COLOR_BLEND) {
+ } else if (stateBindPoint == VK_STATE_BIND_COLOR_BLEND) {
pTrav->obj.status |= OBJSTATUS_COLOR_BLEND_BOUND;
- } else if (stateBindPoint == XGL_STATE_BIND_DEPTH_STENCIL) {
+ } else if (stateBindPoint == VK_STATE_BIND_DEPTH_STENCIL) {
pTrav->obj.status |= OBJSTATUS_DEPTH_STENCIL_BOUND;
}
return;
// If we do not find it, print an error
char str[1024];
sprintf(str, "Unable to track status for non-existent Command Buffer object %p", pObj);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
}
// Reset selected flag state for an object node
-static void reset_status(void* pObj, XGL_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {
+static void reset_status(void* pObj, VK_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {
GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pObjectHead[objType];
while (pTrav) {
if (pTrav->obj.pVkObject == pObj) {
// If we do not find it, print an error
char str[1024];
- sprintf(str, "Unable to reset status for non-existent object %p of %s type", pObj, string_XGL_OBJECT_TYPE(objType));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
+ sprintf(str, "Unable to reset status for non-existent object %p of %s type", pObj, string_VK_OBJECT_TYPE(objType));
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, GLVSNAPSHOT_UNKNOWN_OBJECT, LAYER_ABBREV_STR, str);
}
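set_status() and reset_status() treat an object's status word as independent bits: setting ORs the flag in, resetting ANDs it out with the complement. A tiny sketch of that flag handling (the flag values below are illustrative, not the layer's real OBJECT_STATUS encoding):

```c
/* Sketch of the OBJECT_STATUS bit handling used by set_status() and
 * reset_status(). Flag values are hypothetical. */
#include <stdint.h>

typedef uint32_t OBJECT_STATUS;
#define OBJSTATUS_FENCE_IS_SUBMITTED ((OBJECT_STATUS)0x1)
#define OBJSTATUS_GPU_MEM_MAPPED     ((OBJECT_STATUS)0x2)

/* What set_status() does to pTrav->obj.status once the node is found. */
static OBJECT_STATUS set_flag(OBJECT_STATUS status, OBJECT_STATUS flag)
{
    return status | flag;
}

/* What reset_status() does: clear only the requested bit. */
static OBJECT_STATUS reset_flag(OBJECT_STATUS status, OBJECT_STATUS flag)
{
    return status & ~flag;
}
```

Because each state is one bit, vkMapMemory()/vkUnmapMemory() can toggle OBJSTATUS_GPU_MEM_MAPPED without disturbing a fence-submitted bit set elsewhere.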
-#include "xgl_dispatch_table_helper.h"
+#include "vk_dispatch_table_helper.h"
static void initGlaveSnapshot(void)
{
const char *strOpt;
getLayerOptionEnum(LAYER_NAME_STR "ReportLevel", (uint32_t *) &g_reportingLevel);
g_actionIsDefault = getLayerOptionEnum(LAYER_NAME_STR "DebugAction", (uint32_t *) &g_debugAction);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
{
strOpt = getLayerOption(LAYER_NAME_STR "LogFilename");
if (strOpt)
g_logFile = stdout;
}
- xglGetProcAddrType fpNextGPA;
+ vkGetProcAddrType fpNextGPA;
fpNextGPA = pCurObj->pGPA;
assert(fpNextGPA);
- layer_initialize_dispatch_table(&nextTable, fpNextGPA, (XGL_PHYSICAL_GPU) pCurObj->nextObject);
+ layer_initialize_dispatch_table(&nextTable, fpNextGPA, (VK_PHYSICAL_GPU) pCurObj->nextObject);
if (!objLockInitialized)
{
// TODO/TBD: Need to delete this mutex sometime. How???
//=============================================================================
// vulkan entrypoints
//=============================================================================
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateInstance(const XGL_INSTANCE_CREATE_INFO* pCreateInfo, XGL_INSTANCE* pInstance)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateInstance(const VK_INSTANCE_CREATE_INFO* pCreateInfo, VK_INSTANCE* pInstance)
{
- XGL_RESULT result = nextTable.CreateInstance(pCreateInfo, pInstance);
+ VK_RESULT result = nextTable.CreateInstance(pCreateInfo, pInstance);
loader_platform_thread_lock_mutex(&objLock);
- snapshot_insert_object(&s_delta, *pInstance, XGL_OBJECT_TYPE_INSTANCE);
+ snapshot_insert_object(&s_delta, *pInstance, VK_OBJECT_TYPE_INSTANCE);
loader_platform_thread_unlock_mutex(&objLock);
return result;
}
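Every entrypoint in this layer follows the same intercept shape as vkCreateInstance() above: do the layer's bookkeeping under the lock, then forward the call through nextTable to the layer or driver below. A stripped-down sketch of that chaining, with stand-in types instead of the real VK handles and dispatch table:

```c
/* Hedged sketch of the layer dispatch-chaining pattern; all names here
 * (FAKE_*, layer_DeviceWaitIdle, useCounts) are hypothetical stand-ins. */
#include <stdint.h>

typedef int   FAKE_RESULT;  /* stand-in for VK_RESULT */
typedef void* FAKE_DEVICE;  /* stand-in for VK_DEVICE */

typedef struct {
    FAKE_RESULT (*DeviceWaitIdle)(FAKE_DEVICE device);
} FAKE_DISPATCH_TABLE;

static FAKE_DISPATCH_TABLE nextTable; /* filled in at layer init time   */
static uint64_t useCounts;            /* stand-in for the use-count LL  */

/* Pretend downstream driver entrypoint: always succeeds. */
static FAKE_RESULT driver_DeviceWaitIdle(FAKE_DEVICE device)
{
    (void)device;
    return 0;
}

/* The layer wrapper: bookkeeping first, then chain to the next table,
 * mirroring the ll_increment_use_count() + nextTable.X() shape above. */
static FAKE_RESULT layer_DeviceWaitIdle(FAKE_DEVICE device)
{
    useCounts++;
    return nextTable.DeviceWaitIdle(device);
}
```

In the real layer, nextTable is populated by layer_initialize_dispatch_table() via the next object's GetProcAddr, so the same wrapper works regardless of what sits below it in the chain.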
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyInstance(XGL_INSTANCE instance)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyInstance(VK_INSTANCE instance)
{
- XGL_RESULT result = nextTable.DestroyInstance(instance);
+ VK_RESULT result = nextTable.DestroyInstance(instance);
loader_platform_thread_lock_mutex(&objLock);
snapshot_remove_object(&s_delta, (void*)instance);
loader_platform_thread_unlock_mutex(&objLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateGpus(XGL_INSTANCE instance, uint32_t maxGpus, uint32_t* pGpuCount, XGL_PHYSICAL_GPU* pGpus)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateGpus(VK_INSTANCE instance, uint32_t maxGpus, uint32_t* pGpuCount, VK_PHYSICAL_GPU* pGpus)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)instance, XGL_OBJECT_TYPE_INSTANCE);
+ ll_increment_use_count((void*)instance, VK_OBJECT_TYPE_INSTANCE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.EnumerateGpus(instance, maxGpus, pGpuCount, pGpus);
+ VK_RESULT result = nextTable.EnumerateGpus(instance, maxGpus, pGpuCount, pGpus);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetGpuInfo(XGL_PHYSICAL_GPU gpu, XGL_PHYSICAL_GPU_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetGpuInfo(VK_PHYSICAL_GPU gpu, VK_PHYSICAL_GPU_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initGlaveSnapshot);
- XGL_RESULT result = nextTable.GetGpuInfo((XGL_PHYSICAL_GPU)gpuw->nextObject, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetGpuInfo((VK_PHYSICAL_GPU)gpuw->nextObject, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initGlaveSnapshot);
- XGL_RESULT result = nextTable.CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
snapshot_insert_device(&s_delta, gpu, pCreateInfo, pDevice);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyDevice(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyDevice(VK_DEVICE device)
{
- XGL_RESULT result = nextTable.DestroyDevice(device);
+ VK_RESULT result = nextTable.DestroyDevice(device);
loader_platform_thread_lock_mutex(&objLock);
snapshot_remove_device(&s_delta, device);
loader_platform_thread_unlock_mutex(&objLock);
GLV_VK_SNAPSHOT_LL_NODE *pTrav = s_delta.pGlobalObjs;
while (pTrav != NULL)
{
- if (pTrav->obj.objType == XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY)
+ if (pTrav->obj.objType == VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY)
{
GLV_VK_SNAPSHOT_LL_NODE *pDel = pTrav;
pTrav = pTrav->pNextGlobal;
snapshot_remove_object(&s_delta, (void*)(pDel->obj.pVkObject));
} else {
char str[1024];
- sprintf(str, "OBJ ERROR : %s object %p has not been destroyed (was used %lu times).", string_XGL_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pVkObject, pTrav->obj.numUses);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, device, 0, GLVSNAPSHOT_OBJECT_LEAK, LAYER_ABBREV_STR, str);
+ sprintf(str, "OBJ ERROR : %s object %p has not been destroyed (was used %lu times).", string_VK_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pVkObject, pTrav->obj.numUses);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, device, 0, GLVSNAPSHOT_OBJECT_LEAK, LAYER_ABBREV_STR, str);
pTrav = pTrav->pNextGlobal;
}
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)gpu, XGL_OBJECT_TYPE_PHYSICAL_GPU);
+ ll_increment_use_count((void*)gpu, VK_OBJECT_TYPE_PHYSICAL_GPU);
loader_platform_thread_unlock_mutex(&objLock);
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initGlaveSnapshot);
- XGL_RESULT result = nextTable.GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ VK_RESULT result = nextTable.GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
{
if (gpu != NULL) {
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)gpu, XGL_OBJECT_TYPE_PHYSICAL_GPU);
+ ll_increment_use_count((void*)gpu, VK_OBJECT_TYPE_PHYSICAL_GPU);
loader_platform_thread_unlock_mutex(&objLock);
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initGlaveSnapshot);
- XGL_RESULT result = nextTable.EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ VK_RESULT result = nextTable.EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
return result;
} else {
if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
// This layer compatible with all GPUs
*pOutLayerCount = 1;
strncpy((char *) pOutLayers[0], LAYER_NAME_STR, maxStringSize);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetDeviceQueue(XGL_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, XGL_QUEUE* pQueue)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetDeviceQueue(VK_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, VK_QUEUE* pQueue)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);
+ VK_RESULT result = nextTable.GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueSubmit(XGL_QUEUE queue, uint32_t cmdBufferCount, const XGL_CMD_BUFFER* pCmdBuffers, XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueSubmit(VK_QUEUE queue, uint32_t cmdBufferCount, const VK_CMD_BUFFER* pCmdBuffers, VK_FENCE fence)
{
- set_status((void*)fence, XGL_OBJECT_TYPE_FENCE, OBJSTATUS_FENCE_IS_SUBMITTED);
- XGL_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, fence);
+ set_status((void*)fence, VK_OBJECT_TYPE_FENCE, OBJSTATUS_FENCE_IS_SUBMITTED);
+ VK_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, fence);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueWaitIdle(XGL_QUEUE queue)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueWaitIdle(VK_QUEUE queue)
{
- XGL_RESULT result = nextTable.QueueWaitIdle(queue);
+ VK_RESULT result = nextTable.QueueWaitIdle(queue);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDeviceWaitIdle(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDeviceWaitIdle(VK_DEVICE device)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.DeviceWaitIdle(device);
+ VK_RESULT result = nextTable.DeviceWaitIdle(device);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglAllocMemory(XGL_DEVICE device, const XGL_MEMORY_ALLOC_INFO* pAllocInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkAllocMemory(VK_DEVICE device, const VK_MEMORY_ALLOC_INFO* pAllocInfo, VK_GPU_MEMORY* pMem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.AllocMemory(device, pAllocInfo, pMem);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.AllocMemory(device, pAllocInfo, pMem);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pMem, XGL_OBJECT_TYPE_GPU_MEMORY);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pMem, VK_OBJECT_TYPE_GPU_MEMORY);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglFreeMemory(XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkFreeMemory(VK_GPU_MEMORY mem)
{
- XGL_RESULT result = nextTable.FreeMemory(mem);
+ VK_RESULT result = nextTable.FreeMemory(mem);
loader_platform_thread_lock_mutex(&objLock);
snapshot_remove_object(&s_delta, (void*)mem);
loader_platform_thread_unlock_mutex(&objLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglSetMemoryPriority(XGL_GPU_MEMORY mem, XGL_MEMORY_PRIORITY priority)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkSetMemoryPriority(VK_GPU_MEMORY mem, VK_MEMORY_PRIORITY priority)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY);
+ ll_increment_use_count((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.SetMemoryPriority(mem, priority);
+ VK_RESULT result = nextTable.SetMemoryPriority(mem, priority);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglMapMemory(XGL_GPU_MEMORY mem, XGL_FLAGS flags, void** ppData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkMapMemory(VK_GPU_MEMORY mem, VK_FLAGS flags, void** ppData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY);
+ ll_increment_use_count((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY);
loader_platform_thread_unlock_mutex(&objLock);
- set_status((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);
- XGL_RESULT result = nextTable.MapMemory(mem, flags, ppData);
+ set_status((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);
+ VK_RESULT result = nextTable.MapMemory(mem, flags, ppData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglUnmapMemory(XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkUnmapMemory(VK_GPU_MEMORY mem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY);
+ ll_increment_use_count((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY);
loader_platform_thread_unlock_mutex(&objLock);
- reset_status((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);
- XGL_RESULT result = nextTable.UnmapMemory(mem);
+ reset_status((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);
+ VK_RESULT result = nextTable.UnmapMemory(mem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglPinSystemMemory(XGL_DEVICE device, const void* pSysMem, size_t memSize, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkPinSystemMemory(VK_DEVICE device, const void* pSysMem, size_t memSize, VK_GPU_MEMORY* pMem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.PinSystemMemory(device, pSysMem, memSize, pMem);
+ VK_RESULT result = nextTable.PinSystemMemory(device, pSysMem, memSize, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetMultiGpuCompatibility(XGL_PHYSICAL_GPU gpu0, XGL_PHYSICAL_GPU gpu1, XGL_GPU_COMPATIBILITY_INFO* pInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetMultiGpuCompatibility(VK_PHYSICAL_GPU gpu0, VK_PHYSICAL_GPU gpu1, VK_GPU_COMPATIBILITY_INFO* pInfo)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu0;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu0;
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)gpu0, XGL_OBJECT_TYPE_PHYSICAL_GPU);
+ ll_increment_use_count((void*)gpu0, VK_OBJECT_TYPE_PHYSICAL_GPU);
loader_platform_thread_unlock_mutex(&objLock);
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initGlaveSnapshot);
- XGL_RESULT result = nextTable.GetMultiGpuCompatibility((XGL_PHYSICAL_GPU)gpuw->nextObject, gpu1, pInfo);
+ VK_RESULT result = nextTable.GetMultiGpuCompatibility((VK_PHYSICAL_GPU)gpuw->nextObject, gpu1, pInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenSharedMemory(XGL_DEVICE device, const XGL_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenSharedMemory(VK_DEVICE device, const VK_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.OpenSharedMemory(device, pOpenInfo, pMem);
+ VK_RESULT result = nextTable.OpenSharedMemory(device, pOpenInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenSharedSemaphore(XGL_DEVICE device, const XGL_SEMAPHORE_OPEN_INFO* pOpenInfo, XGL_SEMAPHORE* pSemaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenSharedSemaphore(VK_DEVICE device, const VK_SEMAPHORE_OPEN_INFO* pOpenInfo, VK_SEMAPHORE* pSemaphore)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.OpenSharedSemaphore(device, pOpenInfo, pSemaphore);
+ VK_RESULT result = nextTable.OpenSharedSemaphore(device, pOpenInfo, pSemaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenPeerMemory(XGL_DEVICE device, const XGL_PEER_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenPeerMemory(VK_DEVICE device, const VK_PEER_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.OpenPeerMemory(device, pOpenInfo, pMem);
+ VK_RESULT result = nextTable.OpenPeerMemory(device, pOpenInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenPeerImage(XGL_DEVICE device, const XGL_PEER_IMAGE_OPEN_INFO* pOpenInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenPeerImage(VK_DEVICE device, const VK_PEER_IMAGE_OPEN_INFO* pOpenInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.OpenPeerImage(device, pOpenInfo, pImage, pMem);
+ VK_RESULT result = nextTable.OpenPeerImage(device, pOpenInfo, pImage, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyObject(XGL_OBJECT object)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyObject(VK_OBJECT object)
{
- XGL_RESULT result = nextTable.DestroyObject(object);
+ VK_RESULT result = nextTable.DestroyObject(object);
loader_platform_thread_lock_mutex(&objLock);
snapshot_remove_object(&s_delta, (void*)object);
loader_platform_thread_unlock_mutex(&objLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetObjectInfo(XGL_BASE_OBJECT object, XGL_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetObjectInfo(VK_BASE_OBJECT object, VK_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
loader_platform_thread_lock_mutex(&objLock);
ll_increment_use_count((void*)object, ll_get_obj_type(object));
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.GetObjectInfo(object, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetObjectInfo(object, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindObjectMemory(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_MEMORY mem, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindObjectMemory(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_MEMORY mem, VK_GPU_SIZE offset)
{
loader_platform_thread_lock_mutex(&objLock);
ll_increment_use_count((void*)object, ll_get_obj_type(object));
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.BindObjectMemory(object, allocationIdx, mem, offset);
+ VK_RESULT result = nextTable.BindObjectMemory(object, allocationIdx, mem, offset);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindObjectMemoryRange(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_SIZE rangeOffset, XGL_GPU_SIZE rangeSize, XGL_GPU_MEMORY mem, XGL_GPU_SIZE memOffset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindObjectMemoryRange(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_SIZE rangeOffset, VK_GPU_SIZE rangeSize, VK_GPU_MEMORY mem, VK_GPU_SIZE memOffset)
{
loader_platform_thread_lock_mutex(&objLock);
ll_increment_use_count((void*)object, ll_get_obj_type(object));
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.BindObjectMemoryRange(object, allocationIdx, rangeOffset, rangeSize, mem, memOffset);
+ VK_RESULT result = nextTable.BindObjectMemoryRange(object, allocationIdx, rangeOffset, rangeSize, mem, memOffset);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindImageMemoryRange(XGL_IMAGE image, uint32_t allocationIdx, const XGL_IMAGE_MEMORY_BIND_INFO* bindInfo, XGL_GPU_MEMORY mem, XGL_GPU_SIZE memOffset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindImageMemoryRange(VK_IMAGE image, uint32_t allocationIdx, const VK_IMAGE_MEMORY_BIND_INFO* bindInfo, VK_GPU_MEMORY mem, VK_GPU_SIZE memOffset)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)image, XGL_OBJECT_TYPE_IMAGE);
+ ll_increment_use_count((void*)image, VK_OBJECT_TYPE_IMAGE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.BindImageMemoryRange(image, allocationIdx, bindInfo, mem, memOffset);
+ VK_RESULT result = nextTable.BindImageMemoryRange(image, allocationIdx, bindInfo, mem, memOffset);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateFence(XGL_DEVICE device, const XGL_FENCE_CREATE_INFO* pCreateInfo, XGL_FENCE* pFence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateFence(VK_DEVICE device, const VK_FENCE_CREATE_INFO* pCreateInfo, VK_FENCE* pFence)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateFence(device, pCreateInfo, pFence);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateFence(device, pCreateInfo, pFence);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pFence, XGL_OBJECT_TYPE_FENCE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pFence, VK_OBJECT_TYPE_FENCE);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetFenceStatus(XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetFenceStatus(VK_FENCE fence)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)fence, XGL_OBJECT_TYPE_FENCE);
+ ll_increment_use_count((void*)fence, VK_OBJECT_TYPE_FENCE);
loader_platform_thread_unlock_mutex(&objLock);
// Warn if submitted_flag is not set
- XGL_RESULT result = nextTable.GetFenceStatus(fence);
+ VK_RESULT result = nextTable.GetFenceStatus(fence);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWaitForFences(XGL_DEVICE device, uint32_t fenceCount, const XGL_FENCE* pFences, bool32_t waitAll, uint64_t timeout)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWaitForFences(VK_DEVICE device, uint32_t fenceCount, const VK_FENCE* pFences, bool32_t waitAll, uint64_t timeout)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.WaitForFences(device, fenceCount, pFences, waitAll, timeout);
+ VK_RESULT result = nextTable.WaitForFences(device, fenceCount, pFences, waitAll, timeout);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateSemaphore(XGL_DEVICE device, const XGL_SEMAPHORE_CREATE_INFO* pCreateInfo, XGL_SEMAPHORE* pSemaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateSemaphore(VK_DEVICE device, const VK_SEMAPHORE_CREATE_INFO* pCreateInfo, VK_SEMAPHORE* pSemaphore)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateSemaphore(device, pCreateInfo, pSemaphore);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateSemaphore(device, pCreateInfo, pSemaphore);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pSemaphore, XGL_OBJECT_TYPE_QUEUE_SEMAPHORE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pSemaphore, VK_OBJECT_TYPE_QUEUE_SEMAPHORE);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueSignalSemaphore(XGL_QUEUE queue, XGL_SEMAPHORE semaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueSignalSemaphore(VK_QUEUE queue, VK_SEMAPHORE semaphore)
{
- XGL_RESULT result = nextTable.QueueSignalSemaphore(queue, semaphore);
+ VK_RESULT result = nextTable.QueueSignalSemaphore(queue, semaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueWaitSemaphore(XGL_QUEUE queue, XGL_SEMAPHORE semaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueWaitSemaphore(VK_QUEUE queue, VK_SEMAPHORE semaphore)
{
- XGL_RESULT result = nextTable.QueueWaitSemaphore(queue, semaphore);
+ VK_RESULT result = nextTable.QueueWaitSemaphore(queue, semaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateEvent(XGL_DEVICE device, const XGL_EVENT_CREATE_INFO* pCreateInfo, XGL_EVENT* pEvent)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateEvent(VK_DEVICE device, const VK_EVENT_CREATE_INFO* pCreateInfo, VK_EVENT* pEvent)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateEvent(device, pCreateInfo, pEvent);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateEvent(device, pCreateInfo, pEvent);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pEvent, XGL_OBJECT_TYPE_EVENT);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pEvent, VK_OBJECT_TYPE_EVENT);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetEventStatus(XGL_EVENT event)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetEventStatus(VK_EVENT event)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)event, XGL_OBJECT_TYPE_EVENT);
+ ll_increment_use_count((void*)event, VK_OBJECT_TYPE_EVENT);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.GetEventStatus(event);
+ VK_RESULT result = nextTable.GetEventStatus(event);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglSetEvent(XGL_EVENT event)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkSetEvent(VK_EVENT event)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)event, XGL_OBJECT_TYPE_EVENT);
+ ll_increment_use_count((void*)event, VK_OBJECT_TYPE_EVENT);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.SetEvent(event);
+ VK_RESULT result = nextTable.SetEvent(event);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetEvent(XGL_EVENT event)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetEvent(VK_EVENT event)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)event, XGL_OBJECT_TYPE_EVENT);
+ ll_increment_use_count((void*)event, VK_OBJECT_TYPE_EVENT);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.ResetEvent(event);
+ VK_RESULT result = nextTable.ResetEvent(event);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateQueryPool(XGL_DEVICE device, const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo, XGL_QUERY_POOL* pQueryPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateQueryPool(VK_DEVICE device, const VK_QUERY_POOL_CREATE_INFO* pCreateInfo, VK_QUERY_POOL* pQueryPool)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateQueryPool(device, pCreateInfo, pQueryPool);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateQueryPool(device, pCreateInfo, pQueryPool);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pQueryPool, XGL_OBJECT_TYPE_QUERY_POOL);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pQueryPool, VK_OBJECT_TYPE_QUERY_POOL);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetQueryPoolResults(XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetQueryPoolResults(VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount, size_t* pDataSize, void* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)queryPool, XGL_OBJECT_TYPE_QUERY_POOL);
+ ll_increment_use_count((void*)queryPool, VK_OBJECT_TYPE_QUERY_POOL);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.GetQueryPoolResults(queryPool, startQuery, queryCount, pDataSize, pData);
+ VK_RESULT result = nextTable.GetQueryPoolResults(queryPool, startQuery, queryCount, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetFormatInfo(XGL_DEVICE device, XGL_FORMAT format, XGL_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetFormatInfo(VK_DEVICE device, VK_FORMAT format, VK_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.GetFormatInfo(device, format, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetFormatInfo(device, format, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBuffer(XGL_DEVICE device, const XGL_BUFFER_CREATE_INFO* pCreateInfo, XGL_BUFFER* pBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBuffer(VK_DEVICE device, const VK_BUFFER_CREATE_INFO* pCreateInfo, VK_BUFFER* pBuffer)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateBuffer(device, pCreateInfo, pBuffer);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateBuffer(device, pCreateInfo, pBuffer);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pBuffer, XGL_OBJECT_TYPE_BUFFER);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pBuffer, VK_OBJECT_TYPE_BUFFER);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBufferView(XGL_DEVICE device, const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo, XGL_BUFFER_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBufferView(VK_DEVICE device, const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo, VK_BUFFER_VIEW* pView)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, XGL_OBJECT_TYPE_BUFFER_VIEW);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, VK_OBJECT_TYPE_BUFFER_VIEW);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImage(XGL_DEVICE device, const XGL_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImage(VK_DEVICE device, const VK_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateImage(device, pCreateInfo, pImage);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateImage(device, pCreateInfo, pImage);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pImage, XGL_OBJECT_TYPE_IMAGE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pImage, VK_OBJECT_TYPE_IMAGE);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetImageSubresourceInfo(XGL_IMAGE image, const XGL_IMAGE_SUBRESOURCE* pSubresource, XGL_SUBRESOURCE_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetImageSubresourceInfo(VK_IMAGE image, const VK_IMAGE_SUBRESOURCE* pSubresource, VK_SUBRESOURCE_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)image, XGL_OBJECT_TYPE_IMAGE);
+ ll_increment_use_count((void*)image, VK_OBJECT_TYPE_IMAGE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.GetImageSubresourceInfo(image, pSubresource, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetImageSubresourceInfo(image, pSubresource, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImageView(XGL_DEVICE device, const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo, XGL_IMAGE_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImageView(VK_DEVICE device, const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo, VK_IMAGE_VIEW* pView)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, XGL_OBJECT_TYPE_IMAGE_VIEW);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, VK_OBJECT_TYPE_IMAGE_VIEW);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateColorAttachmentView(XGL_DEVICE device, const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo, XGL_COLOR_ATTACHMENT_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateColorAttachmentView(VK_DEVICE device, const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo, VK_COLOR_ATTACHMENT_VIEW* pView)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateColorAttachmentView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateColorAttachmentView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, XGL_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, VK_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDepthStencilView(XGL_DEVICE device, const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, XGL_DEPTH_STENCIL_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDepthStencilView(VK_DEVICE device, const VK_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, VK_DEPTH_STENCIL_VIEW* pView)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDepthStencilView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDepthStencilView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, XGL_OBJECT_TYPE_DEPTH_STENCIL_VIEW);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pView, VK_OBJECT_TYPE_DEPTH_STENCIL_VIEW);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateShader(XGL_DEVICE device, const XGL_SHADER_CREATE_INFO* pCreateInfo, XGL_SHADER* pShader)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateShader(VK_DEVICE device, const VK_SHADER_CREATE_INFO* pCreateInfo, VK_SHADER* pShader)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateShader(device, pCreateInfo, pShader);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateShader(device, pCreateInfo, pShader);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pShader, XGL_OBJECT_TYPE_SHADER);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pShader, VK_OBJECT_TYPE_SHADER);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipeline(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipeline(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pPipeline, XGL_OBJECT_TYPE_PIPELINE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pPipeline, VK_OBJECT_TYPE_PIPELINE);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateComputePipeline(XGL_DEVICE device, const XGL_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateComputePipeline(VK_DEVICE device, const VK_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateComputePipeline(device, pCreateInfo, pPipeline);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateComputePipeline(device, pCreateInfo, pPipeline);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pPipeline, XGL_OBJECT_TYPE_PIPELINE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pPipeline, VK_OBJECT_TYPE_PIPELINE);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglStorePipeline(XGL_PIPELINE pipeline, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkStorePipeline(VK_PIPELINE pipeline, size_t* pDataSize, void* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)pipeline, XGL_OBJECT_TYPE_PIPELINE);
+ ll_increment_use_count((void*)pipeline, VK_OBJECT_TYPE_PIPELINE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.StorePipeline(pipeline, pDataSize, pData);
+ VK_RESULT result = nextTable.StorePipeline(pipeline, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglLoadPipeline(XGL_DEVICE device, size_t dataSize, const void* pData, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkLoadPipeline(VK_DEVICE device, size_t dataSize, const void* pData, VK_PIPELINE* pPipeline)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.LoadPipeline(device, dataSize, pData, pPipeline);
+ VK_RESULT result = nextTable.LoadPipeline(device, dataSize, pData, pPipeline);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateSampler(XGL_DEVICE device, const XGL_SAMPLER_CREATE_INFO* pCreateInfo, XGL_SAMPLER* pSampler)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateSampler(VK_DEVICE device, const VK_SAMPLER_CREATE_INFO* pCreateInfo, VK_SAMPLER* pSampler)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pSampler, XGL_OBJECT_TYPE_SAMPLER);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pSampler, VK_OBJECT_TYPE_SAMPLER);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayout( XGL_DEVICE device, const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_SET_LAYOUT* pSetLayout)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayout(VK_DEVICE device, const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_SET_LAYOUT* pSetLayout)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDescriptorSetLayout(device, pCreateInfo, pSetLayout);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDescriptorSetLayout(device, pCreateInfo, pSetLayout);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pSetLayout, XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pSetLayout, VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginDescriptorPoolUpdate(XGL_DEVICE device, XGL_DESCRIPTOR_UPDATE_MODE updateMode)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginDescriptorPoolUpdate(VK_DEVICE device, VK_DESCRIPTOR_UPDATE_MODE updateMode)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.BeginDescriptorPoolUpdate(device, updateMode);
+ VK_RESULT result = nextTable.BeginDescriptorPoolUpdate(device, updateMode);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndDescriptorPoolUpdate(XGL_DEVICE device, XGL_CMD_BUFFER cmd)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndDescriptorPoolUpdate(VK_DEVICE device, VK_CMD_BUFFER cmd)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.EndDescriptorPoolUpdate(device, cmd);
+ VK_RESULT result = nextTable.EndDescriptorPoolUpdate(device, cmd);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorPool(XGL_DEVICE device, XGL_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const XGL_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_POOL* pDescriptorPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorPool(VK_DEVICE device, VK_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const VK_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_POOL* pDescriptorPool)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDescriptorPool(device, poolUsage, maxSets, pCreateInfo, pDescriptorPool);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDescriptorPool(device, poolUsage, maxSets, pCreateInfo, pDescriptorPool);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pDescriptorPool, XGL_OBJECT_TYPE_DESCRIPTOR_POOL);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pDescriptorPool, VK_OBJECT_TYPE_DESCRIPTOR_POOL);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetDescriptorPool(XGL_DESCRIPTOR_POOL descriptorPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetDescriptorPool(VK_DESCRIPTOR_POOL descriptorPool)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)descriptorPool, XGL_OBJECT_TYPE_DESCRIPTOR_POOL);
+ ll_increment_use_count((void*)descriptorPool, VK_OBJECT_TYPE_DESCRIPTOR_POOL);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.ResetDescriptorPool(descriptorPool);
+ VK_RESULT result = nextTable.ResetDescriptorPool(descriptorPool);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglAllocDescriptorSets(XGL_DESCRIPTOR_POOL descriptorPool, XGL_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayouts, XGL_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkAllocDescriptorSets(VK_DESCRIPTOR_POOL descriptorPool, VK_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayouts, VK_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)descriptorPool, XGL_OBJECT_TYPE_DESCRIPTOR_POOL);
+ ll_increment_use_count((void*)descriptorPool, VK_OBJECT_TYPE_DESCRIPTOR_POOL);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.AllocDescriptorSets(descriptorPool, setUsage, count, pSetLayouts, pDescriptorSets, pCount);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.AllocDescriptorSets(descriptorPool, setUsage, count, pSetLayouts, pDescriptorSets, pCount);
+ if (result == VK_SUCCESS)
{
for (uint32_t i = 0; i < *pCount; i++) {
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, pDescriptorSets[i], XGL_OBJECT_TYPE_DESCRIPTOR_SET);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, pDescriptorSets[i], VK_OBJECT_TYPE_DESCRIPTOR_SET);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
}
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglClearDescriptorSets(XGL_DESCRIPTOR_POOL descriptorPool, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets)
+VK_LAYER_EXPORT void VKAPI vkClearDescriptorSets(VK_DESCRIPTOR_POOL descriptorPool, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)descriptorPool, XGL_OBJECT_TYPE_DESCRIPTOR_POOL);
+ ll_increment_use_count((void*)descriptorPool, VK_OBJECT_TYPE_DESCRIPTOR_POOL);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.ClearDescriptorSets(descriptorPool, count, pDescriptorSets);
}
-XGL_LAYER_EXPORT void XGLAPI xglUpdateDescriptors(XGL_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray)
+VK_LAYER_EXPORT void VKAPI vkUpdateDescriptors(VK_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)descriptorSet, XGL_OBJECT_TYPE_DESCRIPTOR_SET);
+ ll_increment_use_count((void*)descriptorSet, VK_OBJECT_TYPE_DESCRIPTOR_SET);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.UpdateDescriptors(descriptorSet, updateCount, ppUpdateArray);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicViewportState(XGL_DEVICE device, const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_VP_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicViewportState(VK_DEVICE device, const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_VP_STATE_OBJECT* pState)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, XGL_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, VK_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicRasterState(XGL_DEVICE device, const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_RS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicRasterState(VK_DEVICE device, const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_RS_STATE_OBJECT* pState)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, XGL_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, VK_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicColorBlendState(XGL_DEVICE device, const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_CB_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicColorBlendState(VK_DEVICE device, const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_CB_STATE_OBJECT* pState)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, XGL_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, VK_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicDepthStencilState(XGL_DEVICE device, const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_DS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicDepthStencilState(VK_DEVICE device, const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_DS_STATE_OBJECT* pState)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, XGL_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pState, VK_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateCommandBuffer(XGL_DEVICE device, const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo, XGL_CMD_BUFFER* pCmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateCommandBuffer(VK_DEVICE device, const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo, VK_CMD_BUFFER* pCmdBuffer)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pCmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pCmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginCommandBuffer(XGL_CMD_BUFFER cmdBuffer, const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginCommandBuffer(VK_CMD_BUFFER cmdBuffer, const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
+ VK_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
- reset_status((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER, (OBJSTATUS_VIEWPORT_BOUND |
+ reset_status((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER, (OBJSTATUS_VIEWPORT_BOUND |
OBJSTATUS_RASTER_BOUND |
OBJSTATUS_COLOR_BLEND_BOUND |
OBJSTATUS_DEPTH_STENCIL_BOUND));
- XGL_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
+ VK_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
+ VK_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindPipeline(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_PIPELINE pipeline)
+VK_LAYER_EXPORT void VKAPI vkCmdBindPipeline(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_PIPELINE pipeline)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdBindPipeline(cmdBuffer, pipelineBindPoint, pipeline);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDynamicStateObject(XGL_CMD_BUFFER cmdBuffer, XGL_STATE_BIND_POINT stateBindPoint, XGL_DYNAMIC_STATE_OBJECT state)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDynamicStateObject(VK_CMD_BUFFER cmdBuffer, VK_STATE_BIND_POINT stateBindPoint, VK_DYNAMIC_STATE_OBJECT state)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
track_object_status((void*)cmdBuffer, stateBindPoint);
nextTable.CmdBindDynamicStateObject(cmdBuffer, stateBindPoint, state);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDescriptorSets(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDescriptorSets(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdBindDescriptorSets(cmdBuffer, pipelineBindPoint, layoutChain, layoutChainSlot, count, pDescriptorSets, pUserData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindVertexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t binding)
+VK_LAYER_EXPORT void VKAPI vkCmdBindVertexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t binding)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdBindVertexBuffer(cmdBuffer, buffer, offset, binding);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindIndexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, XGL_INDEX_TYPE indexType)
+VK_LAYER_EXPORT void VKAPI vkCmdBindIndexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, VK_INDEX_TYPE indexType)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdBindIndexBuffer(cmdBuffer, buffer, offset, indexType);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDraw(XGL_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount)
+VK_LAYER_EXPORT void VKAPI vkCmdDraw(VK_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDraw(cmdBuffer, firstVertex, vertexCount, firstInstance, instanceCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexed(XGL_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexed(VK_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDrawIndexed(cmdBuffer, firstIndex, indexCount, vertexOffset, firstInstance, instanceCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDrawIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexedIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexedIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDrawIndexedIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatch(XGL_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatch(VK_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDispatch(cmdBuffer, x, y, z);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatchIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatchIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDispatchIndirect(cmdBuffer, buffer, offset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_COPY* pRegions)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdCopyBuffer(cmdBuffer, srcBuffer, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_IMAGE_COPY* pRegions)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdCopyImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBufferToImage(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBufferToImage(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdCopyBufferToImage(cmdBuffer, srcBuffer, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImageToBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImageToBuffer(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdCopyImageToBuffer(cmdBuffer, srcImage, srcImageLayout, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCloneImageData(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout)
+VK_LAYER_EXPORT void VKAPI vkCmdCloneImageData(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdCloneImageData(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdUpdateBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE dataSize, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdUpdateBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE dataSize, const uint32_t* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdUpdateBuffer(cmdBuffer, destBuffer, destOffset, dataSize, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdFillBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE fillSize, uint32_t data)
+VK_LAYER_EXPORT void VKAPI vkCmdFillBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE fillSize, uint32_t data)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdFillBuffer(cmdBuffer, destBuffer, destOffset, fillSize, data);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearColorImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout, XGL_CLEAR_COLOR color, uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearColorImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout, VK_CLEAR_COLOR color, uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdClearColorImage(cmdBuffer, image, imageLayout, color, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearDepthStencil(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout, float depth, uint32_t stencil, uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearDepthStencil(VK_CMD_BUFFER cmdBuffer, VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout, float depth, uint32_t stencil, uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdClearDepthStencil(cmdBuffer, image, imageLayout, depth, stencil, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResolveImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t rectCount, const XGL_IMAGE_RESOLVE* pRects)
+VK_LAYER_EXPORT void VKAPI vkCmdResolveImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t rectCount, const VK_IMAGE_RESOLVE* pRects)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdResolveImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, rectCount, pRects);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdSetEvent(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent)
+VK_LAYER_EXPORT void VKAPI vkCmdSetEvent(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdSetEvent(cmdBuffer, event, pipeEvent);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetEvent(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent)
+VK_LAYER_EXPORT void VKAPI vkCmdResetEvent(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdResetEvent(cmdBuffer, event, pipeEvent);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdWaitEvents(XGL_CMD_BUFFER cmdBuffer, const XGL_EVENT_WAIT_INFO* pWaitInfo)
+VK_LAYER_EXPORT void VKAPI vkCmdWaitEvents(VK_CMD_BUFFER cmdBuffer, const VK_EVENT_WAIT_INFO* pWaitInfo)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdWaitEvents(cmdBuffer, pWaitInfo);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdPipelineBarrier(XGL_CMD_BUFFER cmdBuffer, const XGL_PIPELINE_BARRIER* pBarrier)
+VK_LAYER_EXPORT void VKAPI vkCmdPipelineBarrier(VK_CMD_BUFFER cmdBuffer, const VK_PIPELINE_BARRIER* pBarrier)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdPipelineBarrier(cmdBuffer, pBarrier);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot, XGL_FLAGS flags)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot, VK_FLAGS flags)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdBeginQuery(cmdBuffer, queryPool, slot, flags);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot)
+VK_LAYER_EXPORT void VKAPI vkCmdEndQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdEndQuery(cmdBuffer, queryPool, slot);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetQueryPool(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
+VK_LAYER_EXPORT void VKAPI vkCmdResetQueryPool(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdResetQueryPool(cmdBuffer, queryPool, startQuery, queryCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdWriteTimestamp(XGL_CMD_BUFFER cmdBuffer, XGL_TIMESTAMP_TYPE timestampType, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdWriteTimestamp(VK_CMD_BUFFER cmdBuffer, VK_TIMESTAMP_TYPE timestampType, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdWriteTimestamp(cmdBuffer, timestampType, destBuffer, destOffset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdInitAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdInitAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdInitAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdLoadAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER srcBuffer, XGL_GPU_SIZE srcOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdLoadAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER srcBuffer, VK_GPU_SIZE srcOffset)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdLoadAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, srcBuffer, srcOffset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdSaveAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdSaveAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdSaveAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, destBuffer, destOffset);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateFramebuffer(XGL_DEVICE device, const XGL_FRAMEBUFFER_CREATE_INFO* pCreateInfo, XGL_FRAMEBUFFER* pFramebuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateFramebuffer(VK_DEVICE device, const VK_FRAMEBUFFER_CREATE_INFO* pCreateInfo, VK_FRAMEBUFFER* pFramebuffer)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateFramebuffer(device, pCreateInfo, pFramebuffer);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateFramebuffer(device, pCreateInfo, pFramebuffer);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pFramebuffer, XGL_OBJECT_TYPE_FRAMEBUFFER);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pFramebuffer, VK_OBJECT_TYPE_FRAMEBUFFER);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateRenderPass(XGL_DEVICE device, const XGL_RENDER_PASS_CREATE_INFO* pCreateInfo, XGL_RENDER_PASS* pRenderPass)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateRenderPass(VK_DEVICE device, const VK_RENDER_PASS_CREATE_INFO* pCreateInfo, VK_RENDER_PASS* pRenderPass)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.CreateRenderPass(device, pCreateInfo, pRenderPass);
- if (result == XGL_SUCCESS)
+ VK_RESULT result = nextTable.CreateRenderPass(device, pCreateInfo, pRenderPass);
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pRenderPass, XGL_OBJECT_TYPE_RENDER_PASS);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pRenderPass, VK_OBJECT_TYPE_RENDER_PASS);
pNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
}
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginRenderPass(XGL_CMD_BUFFER cmdBuffer, const XGL_RENDER_PASS_BEGIN *pRenderPassBegin)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginRenderPass(VK_CMD_BUFFER cmdBuffer, const VK_RENDER_PASS_BEGIN *pRenderPassBegin)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdBeginRenderPass(cmdBuffer, pRenderPassBegin);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndRenderPass(XGL_CMD_BUFFER cmdBuffer, XGL_RENDER_PASS renderPass)
+VK_LAYER_EXPORT void VKAPI vkCmdEndRenderPass(VK_CMD_BUFFER cmdBuffer, VK_RENDER_PASS renderPass)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdEndRenderPass(cmdBuffer, renderPass);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetValidationLevel(XGL_DEVICE device, XGL_VALIDATION_LEVEL validationLevel)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetValidationLevel(VK_DEVICE device, VK_VALIDATION_LEVEL validationLevel)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.DbgSetValidationLevel(device, validationLevel);
+ VK_RESULT result = nextTable.DbgSetValidationLevel(device, validationLevel);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
{
// This layer intercepts callbacks
- XGL_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (XGL_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(XGL_LAYER_DBG_FUNCTION_NODE));
+ VK_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (VK_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(VK_LAYER_DBG_FUNCTION_NODE));
if (!pNewDbgFuncNode)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
pNewDbgFuncNode->pfnMsgCallback = pfnMsgCallback;
pNewDbgFuncNode->pUserData = pUserData;
pNewDbgFuncNode->pNext = g_pDbgFunctionHead;
g_pDbgFunctionHead = pNewDbgFuncNode;
    // Force callbacks if DebugAction hasn't been set to anything other than its initial value
if (g_actionIsDefault) {
- g_debugAction = XGL_DBG_LAYER_ACTION_CALLBACK;
- } XGL_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
+ g_debugAction = VK_DBG_LAYER_ACTION_CALLBACK;
+    }
+    VK_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
- XGL_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
- XGL_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;
+ VK_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
+ VK_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;
while (pTrav) {
if (pTrav->pfnMsgCallback == pfnMsgCallback) {
            pPrev->pNext = pTrav->pNext;
            if (g_pDbgFunctionHead == pTrav)
                g_pDbgFunctionHead = pTrav->pNext;
            free(pTrav);
            break;
        }
        pPrev = pTrav;
        pTrav = pTrav->pNext;
    }
    if (g_pDbgFunctionHead == NULL)
{
if (g_actionIsDefault)
- g_debugAction = XGL_DBG_LAYER_ACTION_LOG_MSG;
+ g_debugAction = VK_DBG_LAYER_ACTION_LOG_MSG;
else
- g_debugAction &= ~XGL_DBG_LAYER_ACTION_CALLBACK;
+ g_debugAction &= ~VK_DBG_LAYER_ACTION_CALLBACK;
}
- XGL_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
+ VK_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetMessageFilter(XGL_DEVICE device, int32_t msgCode, XGL_DBG_MSG_FILTER filter)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetMessageFilter(VK_DEVICE device, int32_t msgCode, VK_DBG_MSG_FILTER filter)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.DbgSetMessageFilter(device, msgCode, filter);
+ VK_RESULT result = nextTable.DbgSetMessageFilter(device, msgCode, filter);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetObjectTag(XGL_BASE_OBJECT object, size_t tagSize, const void* pTag)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetObjectTag(VK_BASE_OBJECT object, size_t tagSize, const void* pTag)
{
loader_platform_thread_lock_mutex(&objLock);
ll_increment_use_count((void*)object, ll_get_obj_type(object));
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.DbgSetObjectTag(object, tagSize, pTag);
+ VK_RESULT result = nextTable.DbgSetObjectTag(object, tagSize, pTag);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetGlobalOption(XGL_INSTANCE instance, XGL_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetGlobalOption(VK_INSTANCE instance, VK_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
{
- XGL_RESULT result = nextTable.DbgSetGlobalOption(instance, dbgOption, dataSize, pData);
+ VK_RESULT result = nextTable.DbgSetGlobalOption(instance, dbgOption, dataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetDeviceOption(XGL_DEVICE device, XGL_DBG_DEVICE_OPTION dbgOption, size_t dataSize, const void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetDeviceOption(VK_DEVICE device, VK_DBG_DEVICE_OPTION dbgOption, size_t dataSize, const void* pData)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.DbgSetDeviceOption(device, dbgOption, dataSize, pData);
+ VK_RESULT result = nextTable.DbgSetDeviceOption(device, dbgOption, dataSize, pData);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDbgMarkerBegin(XGL_CMD_BUFFER cmdBuffer, const char* pMarker)
+VK_LAYER_EXPORT void VKAPI vkCmdDbgMarkerBegin(VK_CMD_BUFFER cmdBuffer, const char* pMarker)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDbgMarkerBegin(cmdBuffer, pMarker);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDbgMarkerEnd(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT void VKAPI vkCmdDbgMarkerEnd(VK_CMD_BUFFER cmdBuffer)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER);
+ ll_increment_use_count((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER);
loader_platform_thread_unlock_mutex(&objLock);
nextTable.CmdDbgMarkerEnd(cmdBuffer);
}
#if defined(__linux__) || defined(XCB_NVIDIA)
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11AssociateConnection(XGL_PHYSICAL_GPU gpu, const XGL_WSI_X11_CONNECTION_INFO* pConnectionInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11AssociateConnection(VK_PHYSICAL_GPU gpu, const VK_WSI_X11_CONNECTION_INFO* pConnectionInfo)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)gpu, XGL_OBJECT_TYPE_PHYSICAL_GPU);
+ ll_increment_use_count((void*)gpu, VK_OBJECT_TYPE_PHYSICAL_GPU);
loader_platform_thread_unlock_mutex(&objLock);
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initGlaveSnapshot);
- XGL_RESULT result = nextTable.WsiX11AssociateConnection((XGL_PHYSICAL_GPU)gpuw->nextObject, pConnectionInfo);
+ VK_RESULT result = nextTable.WsiX11AssociateConnection((VK_PHYSICAL_GPU)gpuw->nextObject, pConnectionInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11GetMSC(XGL_DEVICE device, xcb_window_t window, xcb_randr_crtc_t crtc, uint64_t* pMsc)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11GetMSC(VK_DEVICE device, xcb_window_t window, xcb_randr_crtc_t crtc, uint64_t* pMsc)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.WsiX11GetMSC(device, window, crtc, pMsc);
+ VK_RESULT result = nextTable.WsiX11GetMSC(device, window, crtc, pMsc);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11CreatePresentableImage(XGL_DEVICE device, const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11CreatePresentableImage(VK_DEVICE device, const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem)
{
loader_platform_thread_lock_mutex(&objLock);
- ll_increment_use_count((void*)device, XGL_OBJECT_TYPE_DEVICE);
+ ll_increment_use_count((void*)device, VK_OBJECT_TYPE_DEVICE);
loader_platform_thread_unlock_mutex(&objLock);
- XGL_RESULT result = nextTable.WsiX11CreatePresentableImage(device, pCreateInfo, pImage, pMem);
+ VK_RESULT result = nextTable.WsiX11CreatePresentableImage(device, pCreateInfo, pImage, pMem);
- if (result == XGL_SUCCESS)
+ if (result == VK_SUCCESS)
{
loader_platform_thread_lock_mutex(&objLock);
- GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pImage, XGL_OBJECT_TYPE_IMAGE);
+ GLV_VK_SNAPSHOT_LL_NODE* pNode = snapshot_insert_object(&s_delta, *pImage, VK_OBJECT_TYPE_IMAGE);
pNode->obj.pStruct = NULL;
- GLV_VK_SNAPSHOT_LL_NODE* pMemNode = snapshot_insert_object(&s_delta, *pMem, XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY);
+ GLV_VK_SNAPSHOT_LL_NODE* pMemNode = snapshot_insert_object(&s_delta, *pMem, VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY);
pMemNode->obj.pStruct = NULL;
loader_platform_thread_unlock_mutex(&objLock);
    }
    return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11QueuePresent(XGL_QUEUE queue, const XGL_WSI_X11_PRESENT_INFO* pPresentInfo, XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11QueuePresent(VK_QUEUE queue, const VK_WSI_X11_PRESENT_INFO* pPresentInfo, VK_FENCE fence)
{
- XGL_RESULT result = nextTable.WsiX11QueuePresent(queue, pPresentInfo, fence);
+ VK_RESULT result = nextTable.WsiX11QueuePresent(queue, pPresentInfo, fence);
return result;
}
char str[2048];
GLV_VK_SNAPSHOT_LL_NODE* pTrav = s_delta.pGlobalObjs;
sprintf(str, "==== DELTA SNAPSHOT contains %lu objects, %lu devices, and %lu deleted objects", s_delta.globalObjCount, s_delta.deviceCount, s_delta.deltaDeletedObjectCount);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
// print all objects
if (s_delta.globalObjCount > 0)
{
sprintf(str, "======== DELTA SNAPSHOT Created Objects:");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pTrav->obj.pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pTrav->obj.pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
while (pTrav != NULL)
{
- sprintf(str, "\t%s obj %p", string_XGL_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pVkObject);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pTrav->obj.pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ sprintf(str, "\t%s obj %p", string_VK_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pVkObject);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pTrav->obj.pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
pTrav = pTrav->pNextGlobal;
}
}
{
GLV_VK_SNAPSHOT_LL_NODE* pDeviceNode = s_delta.pDevices;
sprintf(str, "======== DELTA SNAPSHOT Devices:");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
while (pDeviceNode != NULL)
{
GLV_VK_SNAPSHOT_DEVICE_NODE* pDev = (GLV_VK_SNAPSHOT_DEVICE_NODE*)pDeviceNode->obj.pStruct;
- char * createInfoStr = xgl_print_xgl_device_create_info(pDev->params.pCreateInfo, "\t\t");
- sprintf(str, "\t%s obj %p:\n%s", string_XGL_OBJECT_TYPE(XGL_OBJECT_TYPE_DEVICE), pDeviceNode->obj.pVkObject, createInfoStr);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pDeviceNode->obj.pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ char * createInfoStr = vk_print_vk_device_create_info(pDev->params.pCreateInfo, "\t\t");
+ sprintf(str, "\t%s obj %p:\n%s", string_VK_OBJECT_TYPE(VK_OBJECT_TYPE_DEVICE), pDeviceNode->obj.pVkObject, createInfoStr);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pDeviceNode->obj.pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
pDeviceNode = pDeviceNode->pNextObj;
}
}
{
GLV_VK_SNAPSHOT_DELETED_OBJ_NODE* pDelObjNode = s_delta.pDeltaDeletedObjects;
sprintf(str, "======== DELTA SNAPSHOT Deleted Objects:");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
while (pDelObjNode != NULL)
{
- sprintf(str, " %s obj %p", string_XGL_OBJECT_TYPE(pDelObjNode->objType), pDelObjNode->pVkObject);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pDelObjNode->pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
+ sprintf(str, " %s obj %p", string_VK_OBJECT_TYPE(pDelObjNode->objType), pDelObjNode->pVkObject);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pDelObjNode->pVkObject, 0, GLVSNAPSHOT_SNAPSHOT_DATA, LAYER_ABBREV_STR, str);
pDelObjNode = pDelObjNode->pNextObj;
}
}
//=============================================================================
// Old Exported methods
//=============================================================================
-uint64_t glvSnapshotGetObjectCount(XGL_OBJECT_TYPE type)
+uint64_t glvSnapshotGetObjectCount(VK_OBJECT_TYPE type)
{
- uint64_t retVal = (type == XGL_OBJECT_TYPE_ANY) ? s_delta.globalObjCount : s_delta.numObjs[type];
+ uint64_t retVal = (type == VK_OBJECT_TYPE_ANY) ? s_delta.globalObjCount : s_delta.numObjs[type];
return retVal;
}
-XGL_RESULT glvSnapshotGetObjects(XGL_OBJECT_TYPE type, uint64_t objCount, GLV_VK_SNAPSHOT_OBJECT_NODE *pObjNodeArray)
+VK_RESULT glvSnapshotGetObjects(VK_OBJECT_TYPE type, uint64_t objCount, GLV_VK_SNAPSHOT_OBJECT_NODE *pObjNodeArray)
{
    // This bool flags whether we're pulling all objs or just a single class of objs
- bool32_t bAllObjs = (type == XGL_OBJECT_TYPE_ANY);
+ bool32_t bAllObjs = (type == VK_OBJECT_TYPE_ANY);
// Check the count first thing
uint64_t maxObjCount = (bAllObjs) ? s_delta.globalObjCount : s_delta.numObjs[type];
if (objCount > maxObjCount) {
char str[1024];
- sprintf(str, "OBJ ERROR : Received objTrackGetObjects() request for %lu objs, but there are only %lu objs of type %s", objCount, maxObjCount, string_XGL_OBJECT_TYPE(type));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, 0, 0, GLVSNAPSHOT_OBJCOUNT_MAX_EXCEEDED, LAYER_ABBREV_STR, str);
- return XGL_ERROR_INVALID_VALUE;
+ sprintf(str, "OBJ ERROR : Received objTrackGetObjects() request for %lu objs, but there are only %lu objs of type %s", objCount, maxObjCount, string_VK_OBJECT_TYPE(type));
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, 0, 0, GLVSNAPSHOT_OBJCOUNT_MAX_EXCEEDED, LAYER_ABBREV_STR, str);
+ return VK_ERROR_INVALID_VALUE;
}
GLV_VK_SNAPSHOT_LL_NODE* pTrav = (bAllObjs) ? s_delta.pGlobalObjs : s_delta.pObjectHead[type];
for (uint64_t i = 0; i < objCount; i++) {
if (!pTrav) {
char str[1024];
- sprintf(str, "OBJ INTERNAL ERROR : Ran out of %s objs! Should have %lu, but only copied %lu and not the requested %lu.", string_XGL_OBJECT_TYPE(type), maxObjCount, i, objCount);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, 0, 0, GLVSNAPSHOT_INTERNAL_ERROR, LAYER_ABBREV_STR, str);
- return XGL_ERROR_UNKNOWN;
+ sprintf(str, "OBJ INTERNAL ERROR : Ran out of %s objs! Should have %lu, but only copied %lu and not the requested %lu.", string_VK_OBJECT_TYPE(type), maxObjCount, i, objCount);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, 0, 0, GLVSNAPSHOT_INTERNAL_ERROR, LAYER_ABBREV_STR, str);
+ return VK_ERROR_UNKNOWN;
}
memcpy(&pObjNodeArray[i], pTrav, sizeof(GLV_VK_SNAPSHOT_OBJECT_NODE));
pTrav = (bAllObjs) ? pTrav->pNextGlobal : pTrav->pNextObj;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
void glvSnapshotPrintObjects(void)
{
glvSnapshotPrintDelta();
}
-#include "xgl_generic_intercept_proc_helper.h"
-XGL_LAYER_EXPORT void* XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* funcName)
+#include "vk_generic_intercept_proc_helper.h"
+VK_LAYER_EXPORT void* VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* funcName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
void* addr;
if (gpu == NULL)
return NULL;
else {
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU)gpuw->nextObject, funcName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU)gpuw->nextObject, funcName);
}
}
} OBJECT_STATUS;
// Object type enum
-typedef enum _XGL_OBJECT_TYPE
+typedef enum _VK_OBJECT_TYPE
{
- XGL_OBJECT_TYPE_UNKNOWN,
- XGL_OBJECT_TYPE_SAMPLER,
- XGL_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT,
- XGL_OBJECT_TYPE_DESCRIPTOR_SET,
- XGL_OBJECT_TYPE_DESCRIPTOR_POOL,
- XGL_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT,
- XGL_OBJECT_TYPE_IMAGE_VIEW,
- XGL_OBJECT_TYPE_QUEUE_SEMAPHORE,
- XGL_OBJECT_TYPE_SHADER,
- XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT,
- XGL_OBJECT_TYPE_BUFFER,
- XGL_OBJECT_TYPE_PIPELINE,
- XGL_OBJECT_TYPE_DEVICE,
- XGL_OBJECT_TYPE_QUERY_POOL,
- XGL_OBJECT_TYPE_EVENT,
- XGL_OBJECT_TYPE_QUEUE,
- XGL_OBJECT_TYPE_PHYSICAL_GPU,
- XGL_OBJECT_TYPE_RENDER_PASS,
- XGL_OBJECT_TYPE_FRAMEBUFFER,
- XGL_OBJECT_TYPE_IMAGE,
- XGL_OBJECT_TYPE_BUFFER_VIEW,
- XGL_OBJECT_TYPE_DEPTH_STENCIL_VIEW,
- XGL_OBJECT_TYPE_INSTANCE,
- XGL_OBJECT_TYPE_PIPELINE_DELTA,
- XGL_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT,
- XGL_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW,
- XGL_OBJECT_TYPE_GPU_MEMORY,
- XGL_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT,
- XGL_OBJECT_TYPE_FENCE,
- XGL_OBJECT_TYPE_CMD_BUFFER,
- XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY,
+ VK_OBJECT_TYPE_UNKNOWN,
+ VK_OBJECT_TYPE_SAMPLER,
+ VK_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT,
+ VK_OBJECT_TYPE_DESCRIPTOR_SET,
+ VK_OBJECT_TYPE_DESCRIPTOR_POOL,
+ VK_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT,
+ VK_OBJECT_TYPE_IMAGE_VIEW,
+ VK_OBJECT_TYPE_QUEUE_SEMAPHORE,
+ VK_OBJECT_TYPE_SHADER,
+ VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT,
+ VK_OBJECT_TYPE_BUFFER,
+ VK_OBJECT_TYPE_PIPELINE,
+ VK_OBJECT_TYPE_DEVICE,
+ VK_OBJECT_TYPE_QUERY_POOL,
+ VK_OBJECT_TYPE_EVENT,
+ VK_OBJECT_TYPE_QUEUE,
+ VK_OBJECT_TYPE_PHYSICAL_GPU,
+ VK_OBJECT_TYPE_RENDER_PASS,
+ VK_OBJECT_TYPE_FRAMEBUFFER,
+ VK_OBJECT_TYPE_IMAGE,
+ VK_OBJECT_TYPE_BUFFER_VIEW,
+ VK_OBJECT_TYPE_DEPTH_STENCIL_VIEW,
+ VK_OBJECT_TYPE_INSTANCE,
+ VK_OBJECT_TYPE_PIPELINE_DELTA,
+ VK_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT,
+ VK_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW,
+ VK_OBJECT_TYPE_GPU_MEMORY,
+ VK_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT,
+ VK_OBJECT_TYPE_FENCE,
+ VK_OBJECT_TYPE_CMD_BUFFER,
+ VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY,
- XGL_NUM_OBJECT_TYPE,
- XGL_OBJECT_TYPE_ANY, // Allow global object list to be queried/retrieved
-} XGL_OBJECT_TYPE;
+ VK_NUM_OBJECT_TYPE,
+ VK_OBJECT_TYPE_ANY, // Allow global object list to be queried/retrieved
+} VK_OBJECT_TYPE;
-static const char* string_XGL_OBJECT_TYPE(XGL_OBJECT_TYPE type) {
+static const char* string_VK_OBJECT_TYPE(VK_OBJECT_TYPE type) {
switch (type)
{
- case XGL_OBJECT_TYPE_DEVICE:
+ case VK_OBJECT_TYPE_DEVICE:
return "DEVICE";
- case XGL_OBJECT_TYPE_PIPELINE:
+ case VK_OBJECT_TYPE_PIPELINE:
return "PIPELINE";
- case XGL_OBJECT_TYPE_FENCE:
+ case VK_OBJECT_TYPE_FENCE:
return "FENCE";
- case XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT:
+ case VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT:
return "DESCRIPTOR_SET_LAYOUT";
- case XGL_OBJECT_TYPE_GPU_MEMORY:
+ case VK_OBJECT_TYPE_GPU_MEMORY:
return "GPU_MEMORY";
- case XGL_OBJECT_TYPE_QUEUE:
+ case VK_OBJECT_TYPE_QUEUE:
return "QUEUE";
- case XGL_OBJECT_TYPE_IMAGE:
+ case VK_OBJECT_TYPE_IMAGE:
return "IMAGE";
- case XGL_OBJECT_TYPE_CMD_BUFFER:
+ case VK_OBJECT_TYPE_CMD_BUFFER:
return "CMD_BUFFER";
- case XGL_OBJECT_TYPE_QUEUE_SEMAPHORE:
+ case VK_OBJECT_TYPE_QUEUE_SEMAPHORE:
return "QUEUE_SEMAPHORE";
- case XGL_OBJECT_TYPE_FRAMEBUFFER:
+ case VK_OBJECT_TYPE_FRAMEBUFFER:
return "FRAMEBUFFER";
- case XGL_OBJECT_TYPE_SAMPLER:
+ case VK_OBJECT_TYPE_SAMPLER:
return "SAMPLER";
- case XGL_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW:
+ case VK_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW:
return "COLOR_ATTACHMENT_VIEW";
- case XGL_OBJECT_TYPE_BUFFER_VIEW:
+ case VK_OBJECT_TYPE_BUFFER_VIEW:
return "BUFFER_VIEW";
- case XGL_OBJECT_TYPE_DESCRIPTOR_SET:
+ case VK_OBJECT_TYPE_DESCRIPTOR_SET:
return "DESCRIPTOR_SET";
- case XGL_OBJECT_TYPE_PHYSICAL_GPU:
+ case VK_OBJECT_TYPE_PHYSICAL_GPU:
return "PHYSICAL_GPU";
- case XGL_OBJECT_TYPE_IMAGE_VIEW:
+ case VK_OBJECT_TYPE_IMAGE_VIEW:
return "IMAGE_VIEW";
- case XGL_OBJECT_TYPE_BUFFER:
+ case VK_OBJECT_TYPE_BUFFER:
return "BUFFER";
- case XGL_OBJECT_TYPE_PIPELINE_DELTA:
+ case VK_OBJECT_TYPE_PIPELINE_DELTA:
return "PIPELINE_DELTA";
- case XGL_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT:
return "DYNAMIC_RS_STATE_OBJECT";
- case XGL_OBJECT_TYPE_EVENT:
+ case VK_OBJECT_TYPE_EVENT:
return "EVENT";
- case XGL_OBJECT_TYPE_DEPTH_STENCIL_VIEW:
+ case VK_OBJECT_TYPE_DEPTH_STENCIL_VIEW:
return "DEPTH_STENCIL_VIEW";
- case XGL_OBJECT_TYPE_SHADER:
+ case VK_OBJECT_TYPE_SHADER:
return "SHADER";
- case XGL_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT:
return "DYNAMIC_DS_STATE_OBJECT";
- case XGL_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT:
return "DYNAMIC_VP_STATE_OBJECT";
- case XGL_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT:
return "DYNAMIC_CB_STATE_OBJECT";
- case XGL_OBJECT_TYPE_INSTANCE:
+ case VK_OBJECT_TYPE_INSTANCE:
return "INSTANCE";
- case XGL_OBJECT_TYPE_RENDER_PASS:
+ case VK_OBJECT_TYPE_RENDER_PASS:
return "RENDER_PASS";
- case XGL_OBJECT_TYPE_QUERY_POOL:
+ case VK_OBJECT_TYPE_QUERY_POOL:
return "QUERY_POOL";
- case XGL_OBJECT_TYPE_DESCRIPTOR_POOL:
+ case VK_OBJECT_TYPE_DESCRIPTOR_POOL:
return "DESCRIPTOR_POOL";
- case XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY:
+ case VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY:
return "PRESENTABLE_IMAGE_MEMORY";
default:
return "UNKNOWN";
typedef struct _GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS
{
- XGL_PHYSICAL_GPU gpu;
- XGL_DEVICE_CREATE_INFO* pCreateInfo;
- XGL_DEVICE* pDevice;
+ VK_PHYSICAL_GPU gpu;
+ VK_DEVICE_CREATE_INFO* pCreateInfo;
+ VK_DEVICE* pDevice;
} GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS;
-XGL_DEVICE_CREATE_INFO* glv_deepcopy_xgl_device_create_info(const XGL_DEVICE_CREATE_INFO* pSrcCreateInfo);void glv_deepfree_xgl_device_create_info(XGL_DEVICE_CREATE_INFO* pCreateInfo);
-void glv_vk_snapshot_copy_createdevice_params(GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS* pDest, XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice);
+VK_DEVICE_CREATE_INFO* glv_deepcopy_vk_device_create_info(const VK_DEVICE_CREATE_INFO* pSrcCreateInfo);
+void glv_deepfree_vk_device_create_info(VK_DEVICE_CREATE_INFO* pCreateInfo);
+void glv_vk_snapshot_copy_createdevice_params(GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS* pDest, VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice);
void glv_vk_snapshot_destroy_createdevice_params(GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS* pSrc);
//=============================================================================
// Node that stores information about an object
typedef struct _GLV_VK_SNAPSHOT_OBJECT_NODE {
void* pVkObject;
- XGL_OBJECT_TYPE objType;
+ VK_OBJECT_TYPE objType;
uint64_t numUses;
OBJECT_STATUS status;
    void* pStruct; //< optionally points to a device-specific struct (i.e., GLV_VK_SNAPSHOT_DEVICE_NODE)
} GLV_VK_SNAPSHOT_OBJECT_NODE;
-// Node that stores information about an XGL_DEVICE
+// Node that stores information about a VK_DEVICE
typedef struct _GLV_VK_SNAPSHOT_DEVICE_NODE {
// This object
- XGL_DEVICE device;
+ VK_DEVICE device;
// CreateDevice parameters
GLV_VK_SNAPSHOT_CREATEDEVICE_PARAMS params;
typedef struct _GLV_VK_SNAPSHOT_DELETED_OBJ_NODE {
struct _GLV_VK_SNAPSHOT_DELETED_OBJ_NODE* pNextObj;
void* pVkObject;
- XGL_OBJECT_TYPE objType;
+ VK_OBJECT_TYPE objType;
} GLV_VK_SNAPSHOT_DELETED_OBJ_NODE;
//=============================================================================
GLV_VK_SNAPSHOT_LL_NODE* pGlobalObjs;
// TEMPORARY: Keep track of all objects of each type
- uint64_t numObjs[XGL_NUM_OBJECT_TYPE];
- GLV_VK_SNAPSHOT_LL_NODE *pObjectHead[XGL_NUM_OBJECT_TYPE];
+ uint64_t numObjs[VK_NUM_OBJECT_TYPE];
+ GLV_VK_SNAPSHOT_LL_NODE *pObjectHead[VK_NUM_OBJECT_TYPE];
    // List of created devices and a [potentially] hierarchical tree of the objects on each.
    // This is used to represent ownership of the objects.
// merge a delta into a snapshot and return the updated snapshot
GLV_VK_SNAPSHOT glvSnapshotMerge(const GLV_VK_SNAPSHOT * const pDelta, const GLV_VK_SNAPSHOT * const pSnapshot);
-uint64_t glvSnapshotGetObjectCount(XGL_OBJECT_TYPE type);
-XGL_RESULT glvSnapshotGetObjects(XGL_OBJECT_TYPE type, uint64_t objCount, GLV_VK_SNAPSHOT_OBJECT_NODE* pObjNodeArray);
+uint64_t glvSnapshotGetObjectCount(VK_OBJECT_TYPE type);
+VK_RESULT glvSnapshotGetObjects(VK_OBJECT_TYPE type, uint64_t objCount, GLV_VK_SNAPSHOT_OBJECT_NODE* pObjNodeArray);
void glvSnapshotPrintObjects(void);
// Func ptr typedefs
-typedef uint64_t (*GLVSNAPSHOT_GET_OBJECT_COUNT)(XGL_OBJECT_TYPE);
-typedef XGL_RESULT (*GLVSNAPSHOT_GET_OBJECTS)(XGL_OBJECT_TYPE, uint64_t, GLV_VK_SNAPSHOT_OBJECT_NODE*);
+typedef uint64_t (*GLVSNAPSHOT_GET_OBJECT_COUNT)(VK_OBJECT_TYPE);
+typedef VK_RESULT (*GLVSNAPSHOT_GET_OBJECTS)(VK_OBJECT_TYPE, uint64_t, GLV_VK_SNAPSHOT_OBJECT_NODE*);
typedef void (*GLVSNAPSHOT_PRINT_OBJECTS)(void);
typedef void (*GLVSNAPSHOT_START_TRACKING)(void);
typedef GLV_VK_SNAPSHOT (*GLVSNAPSHOT_GET_DELTA)(void);
#include <string>
#include <map>
#include <string.h>
-#include <xglLayer.h>
+#include <vkLayer.h>
#include "loader_platform.h"
#include "layers_config.h"
// The following is #included again to catch certain OS-specific functions
static unsigned int convertStringEnumVal(const char *_enum)
{
// only handles single enum values
- if (!strcmp(_enum, "XGL_DBG_LAYER_ACTION_IGNORE"))
- return XGL_DBG_LAYER_ACTION_IGNORE;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_ACTION_CALLBACK"))
- return XGL_DBG_LAYER_ACTION_CALLBACK;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_ACTION_LOG_MSG"))
- return XGL_DBG_LAYER_ACTION_LOG_MSG;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_ACTION_BREAK"))
- return XGL_DBG_LAYER_ACTION_BREAK;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_LEVEL_INFO"))
- return XGL_DBG_LAYER_LEVEL_INFO;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_LEVEL_WARN"))
- return XGL_DBG_LAYER_LEVEL_WARN;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_LEVEL_PERF_WARN"))
- return XGL_DBG_LAYER_LEVEL_PERF_WARN;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_LEVEL_ERROR"))
- return XGL_DBG_LAYER_LEVEL_ERROR;
- else if (!strcmp(_enum, "XGL_DBG_LAYER_LEVEL_NONE"))
- return XGL_DBG_LAYER_LEVEL_NONE;
+ if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_IGNORE"))
+ return VK_DBG_LAYER_ACTION_IGNORE;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_CALLBACK"))
+ return VK_DBG_LAYER_ACTION_CALLBACK;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_LOG_MSG"))
+ return VK_DBG_LAYER_ACTION_LOG_MSG;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_ACTION_BREAK"))
+ return VK_DBG_LAYER_ACTION_BREAK;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_LEVEL_INFO"))
+ return VK_DBG_LAYER_LEVEL_INFO;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_LEVEL_WARN"))
+ return VK_DBG_LAYER_LEVEL_WARN;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_LEVEL_PERF_WARN"))
+ return VK_DBG_LAYER_LEVEL_PERF_WARN;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_LEVEL_ERROR"))
+ return VK_DBG_LAYER_LEVEL_ERROR;
+ else if (!strcmp(_enum, "VK_DBG_LAYER_LEVEL_NONE"))
+ return VK_DBG_LAYER_LEVEL_NONE;
return 0;
}
const char *getLayerOption(const char *_option)
std::map<std::string, std::string>::const_iterator it;
if (!m_fileIsParsed)
{
- parseFile("xgl_layer_settings.txt");
+ parseFile("vk_layer_settings.txt");
}
if ((it = m_valueMap.find(_option)) == m_valueMap.end())
{
if (!m_fileIsParsed)
{
- parseFile("xgl_layer_settings.txt");
+ parseFile("vk_layer_settings.txt");
}
m_valueMap[_option] = _val;
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <stdio.h>
#include <stdbool.h>
-static XGL_LAYER_DBG_FUNCTION_NODE *g_pDbgFunctionHead = NULL;
-static XGL_LAYER_DBG_REPORT_LEVEL g_reportingLevel = XGL_DBG_LAYER_LEVEL_INFO;
-static XGL_LAYER_DBG_ACTION g_debugAction = XGL_DBG_LAYER_ACTION_LOG_MSG;
+static VK_LAYER_DBG_FUNCTION_NODE *g_pDbgFunctionHead = NULL;
+static VK_LAYER_DBG_REPORT_LEVEL g_reportingLevel = VK_DBG_LAYER_LEVEL_INFO;
+static VK_LAYER_DBG_ACTION g_debugAction = VK_DBG_LAYER_ACTION_LOG_MSG;
static bool g_actionIsDefault = true;
static FILE *g_logFile = NULL;
// Utility function to handle reporting
// If callbacks are enabled, use them, otherwise use printf
-static void layerCbMsg(XGL_DBG_MSG_TYPE msgType,
- XGL_VALIDATION_LEVEL validationLevel,
- XGL_BASE_OBJECT srcObject,
+static void layerCbMsg(VK_DBG_MSG_TYPE msgType,
+ VK_VALIDATION_LEVEL validationLevel,
+ VK_BASE_OBJECT srcObject,
size_t location,
int32_t msgCode,
const char* pLayerPrefix,
g_logFile = stdout;
}
- if (g_debugAction & (XGL_DBG_LAYER_ACTION_LOG_MSG | XGL_DBG_LAYER_ACTION_CALLBACK)) {
- XGL_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
+ if (g_debugAction & (VK_DBG_LAYER_ACTION_LOG_MSG | VK_DBG_LAYER_ACTION_CALLBACK)) {
+ VK_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
switch (msgType) {
- case XGL_DBG_MSG_ERROR:
- if (g_reportingLevel <= XGL_DBG_LAYER_LEVEL_ERROR) {
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG) {
+ case VK_DBG_MSG_ERROR:
+ if (g_reportingLevel <= VK_DBG_LAYER_LEVEL_ERROR) {
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG) {
fprintf(g_logFile, "{%s}ERROR : %s\n", pLayerPrefix, pMsg);
fflush(g_logFile);
}
- if (g_debugAction & XGL_DBG_LAYER_ACTION_CALLBACK)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_CALLBACK)
while (pTrav) {
pTrav->pfnMsgCallback(msgType, validationLevel, srcObject, location, msgCode, pMsg, pTrav->pUserData);
pTrav = pTrav->pNext;
}
}
break;
- case XGL_DBG_MSG_WARNING:
- if (g_reportingLevel <= XGL_DBG_LAYER_LEVEL_WARN) {
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ case VK_DBG_MSG_WARNING:
+ if (g_reportingLevel <= VK_DBG_LAYER_LEVEL_WARN) {
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
fprintf(g_logFile, "{%s}WARN : %s\n", pLayerPrefix, pMsg);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_CALLBACK)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_CALLBACK)
while (pTrav) {
pTrav->pfnMsgCallback(msgType, validationLevel, srcObject, location, msgCode, pMsg, pTrav->pUserData);
pTrav = pTrav->pNext;
}
}
break;
- case XGL_DBG_MSG_PERF_WARNING:
- if (g_reportingLevel <= XGL_DBG_LAYER_LEVEL_PERF_WARN) {
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ case VK_DBG_MSG_PERF_WARNING:
+ if (g_reportingLevel <= VK_DBG_LAYER_LEVEL_PERF_WARN) {
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
fprintf(g_logFile, "{%s}PERF_WARN : %s\n", pLayerPrefix, pMsg);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_CALLBACK)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_CALLBACK)
while (pTrav) {
pTrav->pfnMsgCallback(msgType, validationLevel, srcObject, location, msgCode, pMsg, pTrav->pUserData);
pTrav = pTrav->pNext;
                }
            }
            break;
default:
- if (g_reportingLevel <= XGL_DBG_LAYER_LEVEL_INFO) {
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ if (g_reportingLevel <= VK_DBG_LAYER_LEVEL_INFO) {
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
fprintf(g_logFile, "{%s}INFO : %s\n", pLayerPrefix, pMsg);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_CALLBACK)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_CALLBACK)
while (pTrav) {
pTrav->pfnMsgCallback(msgType, validationLevel, srcObject, location, msgCode, pMsg, pTrav->pUserData);
pTrav = pTrav->pNext;
using namespace std;
#include "loader_platform.h"
-#include "xgl_dispatch_table_helper.h"
-#include "xgl_struct_string_helper_cpp.h"
+#include "vk_dispatch_table_helper.h"
+#include "vk_struct_string_helper_cpp.h"
#include "mem_tracker.h"
#include "layers_config.h"
// The following is #included again to catch certain OS-specific functions
#include "loader_platform.h"
#include "layers_msg.h"
-static XGL_LAYER_DISPATCH_TABLE nextTable;
-static XGL_BASE_LAYER_OBJECT *pCurObj;
+static VK_LAYER_DISPATCH_TABLE nextTable;
+static VK_BASE_LAYER_OBJECT *pCurObj;
static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(g_initOnce);
// TODO : This can be much smarter, using separate locks for separate global data
static int globalLockInitialized = 0;
#define MAX_BINDING 0xFFFFFFFF
-map<XGL_CMD_BUFFER, MT_CB_INFO*> cbMap;
-map<XGL_GPU_MEMORY, MT_MEM_OBJ_INFO*> memObjMap;
-map<XGL_OBJECT, MT_OBJ_INFO*> objectMap;
+map<VK_CMD_BUFFER, MT_CB_INFO*> cbMap;
+map<VK_GPU_MEMORY, MT_MEM_OBJ_INFO*> memObjMap;
+map<VK_OBJECT, MT_OBJ_INFO*> objectMap;
map<uint64_t, MT_FENCE_INFO*> fenceMap; // Map fenceId to fence info
-map<XGL_QUEUE, MT_QUEUE_INFO*> queueMap;
+map<VK_QUEUE, MT_QUEUE_INFO*> queueMap;
// TODO : Add per-device fence completion
static uint64_t g_currentFenceId = 1;
-static XGL_DEVICE globalDevice = NULL;
+static VK_DEVICE globalDevice = NULL;
// Add new queue for this device to map container
-static void addQueueInfo(const XGL_QUEUE queue)
+static void addQueueInfo(const VK_QUEUE queue)
{
MT_QUEUE_INFO* pInfo = new MT_QUEUE_INFO;
pInfo->lastRetiredId = 0;
static void deleteQueueInfoList(void)
{
// Process queue list, cleaning up each entry before deleting
- for (map<XGL_QUEUE, MT_QUEUE_INFO*>::iterator ii=queueMap.begin(); ii!=queueMap.end(); ++ii) {
+ for (map<VK_QUEUE, MT_QUEUE_INFO*>::iterator ii=queueMap.begin(); ii!=queueMap.end(); ++ii) {
(*ii).second->pQueueCmdBuffers.clear();
}
queueMap.clear();
}
// Add new CBInfo for this cb to map container
-static void addCBInfo(const XGL_CMD_BUFFER cb)
+static void addCBInfo(const VK_CMD_BUFFER cb)
{
MT_CB_INFO* pInfo = new MT_CB_INFO;
- memset(pInfo, 0, (sizeof(MT_CB_INFO) - sizeof(list<XGL_GPU_MEMORY>)));
+ memset(pInfo, 0, (sizeof(MT_CB_INFO) - sizeof(list<VK_GPU_MEMORY>)));
pInfo->cmdBuffer = cb;
cbMap[cb] = pInfo;
}
// Return ptr to Info in CB map, or NULL if not found
-static MT_CB_INFO* getCBInfo(const XGL_CMD_BUFFER cb)
+static MT_CB_INFO* getCBInfo(const VK_CMD_BUFFER cb)
{
MT_CB_INFO* pCBInfo = NULL;
if (cbMap.find(cb) != cbMap.end()) {
}
// Return object info for 'object' or return NULL if no info exists
-static MT_OBJ_INFO* getObjectInfo(const XGL_OBJECT object)
+static MT_OBJ_INFO* getObjectInfo(const VK_OBJECT object)
{
MT_OBJ_INFO* pObjInfo = NULL;
return pObjInfo;
}
-static MT_OBJ_INFO* addObjectInfo(XGL_OBJECT object, XGL_STRUCTURE_TYPE sType, const void *pCreateInfo, const int struct_size, const char *name_prefix)
+static MT_OBJ_INFO* addObjectInfo(VK_OBJECT object, VK_STRUCTURE_TYPE sType, const void *pCreateInfo, const int struct_size, const char *name_prefix)
{
MT_OBJ_INFO* pInfo = new MT_OBJ_INFO;
memset(pInfo, 0, sizeof(MT_OBJ_INFO));
}
// Add a fence, creating one if necessary to our list of fences/fenceIds
-static uint64_t addFenceInfo(XGL_FENCE fence, XGL_QUEUE queue)
+static uint64_t addFenceInfo(VK_FENCE fence, VK_QUEUE queue)
{
// Create fence object
MT_FENCE_INFO* pFenceInfo = new MT_FENCE_INFO;
memset(pFenceInfo, 0, sizeof(MT_FENCE_INFO));
// If no fence, create an internal fence to track the submissions
if (fence == NULL) {
- XGL_FENCE_CREATE_INFO fci;
- fci.sType = XGL_STRUCTURE_TYPE_FENCE_CREATE_INFO;
+ VK_FENCE_CREATE_INFO fci;
+ fci.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
fci.pNext = NULL;
- fci.flags = static_cast<XGL_FENCE_CREATE_FLAGS>(0);
+ fci.flags = static_cast<VK_FENCE_CREATE_FLAGS>(0);
nextTable.CreateFence(globalDevice, &fci, &pFenceInfo->fence);
- addObjectInfo(pFenceInfo->fence, fci.sType, &fci, sizeof(XGL_FENCE_CREATE_INFO), "internalFence");
- pFenceInfo->localFence = XGL_TRUE;
+ addObjectInfo(pFenceInfo->fence, fci.sType, &fci, sizeof(VK_FENCE_CREATE_INFO), "internalFence");
+ pFenceInfo->localFence = VK_TRUE;
} else {
- pFenceInfo->localFence = XGL_FALSE;
+ pFenceInfo->localFence = VK_FALSE;
pFenceInfo->fence = fence;
}
pFenceInfo->queue = queue;
map<uint64_t, MT_FENCE_INFO*>::iterator item;
MT_FENCE_INFO* pDelInfo = fenceMap[fenceId];
if (pDelInfo != NULL) {
- if (pDelInfo->localFence == XGL_TRUE) {
+ if (pDelInfo->localFence == VK_TRUE) {
nextTable.DestroyObject(pDelInfo->fence);
}
delete pDelInfo;
}
// Search through list for this fence, deleting all items before it (with lower IDs) and updating lastRetiredId
-static void updateFenceTracking(XGL_FENCE fence)
+static void updateFenceTracking(VK_FENCE fence)
{
MT_FENCE_INFO *pCurFenceInfo = NULL;
uint64_t fenceId = 0;
- XGL_QUEUE queue = NULL;
+ VK_QUEUE queue = NULL;
for (map<uint64_t, MT_FENCE_INFO*>::iterator ii=fenceMap.begin(); ii!=fenceMap.end(); ++ii) {
if ((*ii).second != NULL) {
// Update fence state in fenceCreateInfo structure
MT_OBJ_INFO* pObjectInfo = getObjectInfo(fence);
if (pObjectInfo != NULL) {
- pObjectInfo->create_info.fence_create_info.flags = XGL_FENCE_CREATE_SIGNALED_BIT;
+ pObjectInfo->create_info.fence_create_info.flags = VK_FENCE_CREATE_SIGNALED_BIT;
}
}
}
// Utility function that determines if a fenceId has been retired yet
static bool32_t fenceRetired(uint64_t fenceId)
{
- bool32_t result = XGL_FALSE;
+ bool32_t result = VK_FALSE;
MT_FENCE_INFO* pFenceInfo = fenceMap[fenceId];
if (pFenceInfo != 0)
{
MT_QUEUE_INFO* pQueueInfo = queueMap[pFenceInfo->queue];
if (fenceId <= pQueueInfo->lastRetiredId)
{
- result = XGL_TRUE;
+ result = VK_TRUE;
}
} else { // If not in list, fence has been retired and deleted
- result = XGL_TRUE;
+ result = VK_TRUE;
}
return result;
}
// Return the fence associated with a fenceId
-static XGL_FENCE getFenceFromId(uint64_t fenceId)
+static VK_FENCE getFenceFromId(uint64_t fenceId)
{
- XGL_FENCE fence = NULL;
+ VK_FENCE fence = NULL;
if (fenceId != 0) {
// Search for an item with this fenceId
if (fenceMap.find(fenceId) != fenceMap.end()) {
}
// Helper routine that updates the fence list for a specific queue to all-retired
-static void retireQueueFences(XGL_QUEUE queue)
+static void retireQueueFences(VK_QUEUE queue)
{
MT_QUEUE_INFO *pQueueInfo = queueMap[queue];
pQueueInfo->lastRetiredId = pQueueInfo->lastSubmittedId;
}
// Helper routine that updates fence list for all queues to all-retired
-static void retireDeviceFences(XGL_DEVICE device)
+static void retireDeviceFences(VK_DEVICE device)
{
// Process each queue for device
// TODO: Add multiple device support
- for (map<XGL_QUEUE, MT_QUEUE_INFO*>::iterator ii=queueMap.begin(); ii!=queueMap.end(); ++ii) {
+ for (map<VK_QUEUE, MT_QUEUE_INFO*>::iterator ii=queueMap.begin(); ii!=queueMap.end(); ++ii) {
retireQueueFences((*ii).first);
}
}
// Returns True if a memory reference is present in a Queue's memory reference list
// Queue is validated by caller
static bool32_t checkMemRef(
- XGL_QUEUE queue,
- XGL_GPU_MEMORY mem)
+ VK_QUEUE queue,
+ VK_GPU_MEMORY mem)
{
- bool32_t result = XGL_FALSE;
- list<XGL_GPU_MEMORY>::iterator it;
+ bool32_t result = VK_FALSE;
+ list<VK_GPU_MEMORY>::iterator it;
MT_QUEUE_INFO *pQueueInfo = queueMap[queue];
for (it = pQueueInfo->pMemRefList.begin(); it != pQueueInfo->pMemRefList.end(); ++it) {
if ((*it) == mem) {
- result = XGL_TRUE;
+ result = VK_TRUE;
break;
}
}
}
static bool32_t validateQueueMemRefs(
- XGL_QUEUE queue,
+ VK_QUEUE queue,
uint32_t cmdBufferCount,
- const XGL_CMD_BUFFER *pCmdBuffers)
+ const VK_CMD_BUFFER *pCmdBuffers)
{
- bool32_t result = XGL_TRUE;
+ bool32_t result = VK_TRUE;
// Verify Queue
MT_QUEUE_INFO *pQueueInfo = queueMap[queue];
if (pQueueInfo == NULL) {
char str[1024];
- sprintf(str, "Unknown Queue %p specified in xglQueueSubmit", queue);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_QUEUE, "MEM", str);
+ sprintf(str, "Unknown Queue %p specified in vkQueueSubmit", queue);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_QUEUE, "MEM", str);
}
else {
// Iterate through all CBs in pCmdBuffers
MT_CB_INFO* pCBInfo = getCBInfo(pCmdBuffers[i]);
if (!pCBInfo) {
char str[1024];
- sprintf(str, "Unable to find info for CB %p in order to check memory references in xglQueueSubmit for queue %p", (void*)pCmdBuffers[i], queue);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pCmdBuffers[i], 0, MEMTRACK_INVALID_CB, "MEM", str);
- result = XGL_FALSE;
+ sprintf(str, "Unable to find info for CB %p in order to check memory references in vkQueueSubmit for queue %p", (void*)pCmdBuffers[i], queue);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pCmdBuffers[i], 0, MEMTRACK_INVALID_CB, "MEM", str);
+ result = VK_FALSE;
} else {
// Validate that all actual references are accounted for in pMemRefs
- for (list<XGL_GPU_MEMORY>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
+ for (list<VK_GPU_MEMORY>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
// Search for each memref in the queue's memref list.
if (checkMemRef(queue, *it)) {
char str[1024];
sprintf(str, "Found Mem Obj %p binding to CB %p for queue %p", (*it), pCmdBuffers[i], queue);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pCmdBuffers[i], 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pCmdBuffers[i], 0, MEMTRACK_NONE, "MEM", str);
}
else {
char str[1024];
sprintf(str, "Queue %p Memory reference list for Command Buffer %p is missing ref to mem obj %p", queue, pCmdBuffers[i], (*it));
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pCmdBuffers[i], 0, MEMTRACK_INVALID_MEM_REF, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pCmdBuffers[i], 0, MEMTRACK_INVALID_MEM_REF, "MEM", str);
+ result = VK_FALSE;
}
}
}
}
- if (result == XGL_TRUE) {
+ if (result == VK_TRUE) {
char str[1024];
sprintf(str, "Verified all memory dependencies for Queue %p are included in pMemRefs list", queue);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_NONE, "MEM", str);
// TODO : Could report mem refs in pMemRefs that AREN'T in mem list, that would be primarily informational
// Currently just noting that there is a difference
}
// Return ptr to info in map container containing mem, or NULL if not found
// Calls to this function should be wrapped in mutex
-static MT_MEM_OBJ_INFO* getMemObjInfo(const XGL_GPU_MEMORY mem)
+static MT_MEM_OBJ_INFO* getMemObjInfo(const VK_GPU_MEMORY mem)
{
MT_MEM_OBJ_INFO* pMemObjInfo = NULL;
return pMemObjInfo;
}
-static void addMemObjInfo(const XGL_GPU_MEMORY mem, const XGL_MEMORY_ALLOC_INFO* pAllocInfo)
+static void addMemObjInfo(const VK_GPU_MEMORY mem, const VK_MEMORY_ALLOC_INFO* pAllocInfo)
{
MT_MEM_OBJ_INFO* pInfo = new MT_MEM_OBJ_INFO;
pInfo->refCount = 0;
- memset(&pInfo->allocInfo, 0, sizeof(XGL_MEMORY_ALLOC_INFO));
+ memset(&pInfo->allocInfo, 0, sizeof(VK_MEMORY_ALLOC_INFO));
- if (pAllocInfo) { // MEM alloc created by xglWsiX11CreatePresentableImage() doesn't have alloc info struct
- memcpy(&pInfo->allocInfo, pAllocInfo, sizeof(XGL_MEMORY_ALLOC_INFO));
+ if (pAllocInfo) { // MEM alloc created by vkWsiX11CreatePresentableImage() doesn't have alloc info struct
+ memcpy(&pInfo->allocInfo, pAllocInfo, sizeof(VK_MEMORY_ALLOC_INFO));
// TODO: Update for real hardware, actually process allocation info structures
pInfo->allocInfo.pNext = NULL;
}
// Find CB Info and add mem binding to list container
// Find Mem Obj Info and add CB binding to list container
-static bool32_t updateCBBinding(const XGL_CMD_BUFFER cb, const XGL_GPU_MEMORY mem)
+static bool32_t updateCBBinding(const VK_CMD_BUFFER cb, const VK_GPU_MEMORY mem)
{
- bool32_t result = XGL_TRUE;
+ bool32_t result = VK_TRUE;
// First update CB binding in MemObj mini CB list
MT_MEM_OBJ_INFO* pMemInfo = getMemObjInfo(mem);
if (!pMemInfo) {
char str[1024];
sprintf(str, "Trying to bind mem obj %p to CB %p but no info for that mem obj.\n Was it correctly allocated? Did it already get freed?", mem, cb);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
+ result = VK_FALSE;
} else {
// Search for cmd buffer object in memory object's binding list
- bool32_t found = XGL_FALSE;
- for (list<XGL_CMD_BUFFER>::iterator it = pMemInfo->pCmdBufferBindings.begin(); it != pMemInfo->pCmdBufferBindings.end(); ++it) {
+ bool32_t found = VK_FALSE;
+ for (list<VK_CMD_BUFFER>::iterator it = pMemInfo->pCmdBufferBindings.begin(); it != pMemInfo->pCmdBufferBindings.end(); ++it) {
if ((*it) == cb) {
- found = XGL_TRUE;
+ found = VK_TRUE;
break;
}
}
// If not present, add to list
- if (found == XGL_FALSE) {
+ if (found == VK_FALSE) {
pMemInfo->pCmdBufferBindings.push_front(cb);
pMemInfo->refCount++;
}
if (!pCBInfo) {
char str[1024];
sprintf(str, "Trying to bind mem obj %p to CB %p but no info for that CB. Was the CB incorrectly destroyed?", mem, cb);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
+ result = VK_FALSE;
} else {
// Search for memory object in cmd buffer's binding list
- bool32_t found = XGL_FALSE;
- for (list<XGL_GPU_MEMORY>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
+ bool32_t found = VK_FALSE;
+ for (list<VK_GPU_MEMORY>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
if ((*it) == mem) {
- found = XGL_TRUE;
+ found = VK_TRUE;
break;
}
}
// If not present, add to list
- if (found == XGL_FALSE) {
+ if (found == VK_FALSE) {
pCBInfo->pMemObjList.push_front(mem);
}
}
// Clear the CB Binding for mem
// Calls to this function should be wrapped in mutex
-static void clearCBBinding(const XGL_CMD_BUFFER cb, const XGL_GPU_MEMORY mem)
+static void clearCBBinding(const VK_CMD_BUFFER cb, const VK_GPU_MEMORY mem)
{
MT_MEM_OBJ_INFO* pInfo = getMemObjInfo(mem);
// TODO : Having this check is not ideal, really if memInfo was deleted,
}
// Free bindings related to CB
-static bool32_t freeCBBindings(const XGL_CMD_BUFFER cb)
+static bool32_t freeCBBindings(const VK_CMD_BUFFER cb)
{
- bool32_t result = XGL_TRUE;
+ bool32_t result = VK_TRUE;
MT_CB_INFO* pCBInfo = getCBInfo(cb);
if (!pCBInfo) {
char str[1024];
sprintf(str, "Unable to find global CB info %p for deletion", cb);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_CB, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_CB, "MEM", str);
+ result = VK_FALSE;
} else {
if (!fenceRetired(pCBInfo->fenceId)) {
deleteFenceInfo(pCBInfo->fenceId);
}
- for (list<XGL_GPU_MEMORY>::iterator it=pCBInfo->pMemObjList.begin(); it!=pCBInfo->pMemObjList.end(); ++it) {
+ for (list<VK_GPU_MEMORY>::iterator it=pCBInfo->pMemObjList.begin(); it!=pCBInfo->pMemObjList.end(); ++it) {
clearCBBinding(cb, (*it));
}
pCBInfo->pMemObjList.clear();
// Delete CBInfo from list along with all of its mini MemObjInfo
// and also clear mem references to CB
// TODO : When should this be called? There's no Destroy of CBs that I see
-static bool32_t deleteCBInfo(const XGL_CMD_BUFFER cb)
+static bool32_t deleteCBInfo(const VK_CMD_BUFFER cb)
{
- bool32_t result = XGL_TRUE;
+ bool32_t result = VK_TRUE;
result = freeCBBindings(cb);
// Delete the CBInfo info
- if (result == XGL_TRUE) {
+ if (result == VK_TRUE) {
if (cbMap.find(cb) != cbMap.end()) {
MT_CB_INFO* pDelInfo = cbMap[cb];
delete pDelInfo;
// Delete the entire CB list
static bool32_t deleteCBInfoList()
{
- bool32_t result = XGL_TRUE;
- for (map<XGL_CMD_BUFFER, MT_CB_INFO*>::iterator ii=cbMap.begin(); ii!=cbMap.end(); ++ii) {
+ bool32_t result = VK_TRUE;
+ for (map<VK_CMD_BUFFER, MT_CB_INFO*>::iterator ii=cbMap.begin(); ii!=cbMap.end(); ++ii) {
freeCBBindings((*ii).first);
delete (*ii).second;
}
{
uint32_t refCount = 0; // Count found references
- for (list<XGL_CMD_BUFFER>::const_iterator it = pMemObjInfo->pCmdBufferBindings.begin(); it != pMemObjInfo->pCmdBufferBindings.end(); ++it) {
+ for (list<VK_CMD_BUFFER>::const_iterator it = pMemObjInfo->pCmdBufferBindings.begin(); it != pMemObjInfo->pCmdBufferBindings.end(); ++it) {
refCount++;
char str[1024];
sprintf(str, "Command Buffer %p has reference to mem obj %p", (*it), pMemObjInfo->mem);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, (*it), 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, (*it), 0, MEMTRACK_NONE, "MEM", str);
}
- for (list<XGL_OBJECT>::const_iterator it = pMemObjInfo->pObjBindings.begin(); it != pMemObjInfo->pObjBindings.end(); ++it) {
+ for (list<VK_OBJECT>::const_iterator it = pMemObjInfo->pObjBindings.begin(); it != pMemObjInfo->pObjBindings.end(); ++it) {
char str[1024];
- sprintf(str, "XGL Object %p has reference to mem obj %p", (*it), pMemObjInfo->mem);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, (*it), 0, MEMTRACK_NONE, "MEM", str);
+ sprintf(str, "VK Object %p has reference to mem obj %p", (*it), pMemObjInfo->mem);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, (*it), 0, MEMTRACK_NONE, "MEM", str);
}
if (refCount != pMemObjInfo->refCount) {
char str[1024];
sprintf(str, "Refcount of %u for Mem Obj %p doesn't match reported refs of %u", pMemObjInfo->refCount, pMemObjInfo->mem, refCount);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pMemObjInfo->mem, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pMemObjInfo->mem, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
}
}
-static void deleteMemObjInfo(XGL_GPU_MEMORY mem)
+static void deleteMemObjInfo(VK_GPU_MEMORY mem)
{
MT_MEM_OBJ_INFO* pDelInfo = memObjMap[mem];
if (memObjMap.find(mem) != memObjMap.end()) {
}
// Check if fence for given CB is completed
-static bool32_t checkCBCompleted(const XGL_CMD_BUFFER cb)
+static bool32_t checkCBCompleted(const VK_CMD_BUFFER cb)
{
- bool32_t result = XGL_TRUE;
+ bool32_t result = VK_TRUE;
MT_CB_INFO* pCBInfo = getCBInfo(cb);
if (!pCBInfo) {
char str[1024];
sprintf(str, "Unable to find global CB info %p to check for completion", cb);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_CB, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_INVALID_CB, "MEM", str);
+ result = VK_FALSE;
} else {
if (!fenceRetired(pCBInfo->fenceId)) {
char str[1024];
sprintf(str, "FenceId %" PRIx64", fence %p for CB %p has not been checked for completion", pCBInfo->fenceId, getFenceFromId(pCBInfo->fenceId), cb);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_NONE, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, cb, 0, MEMTRACK_NONE, "MEM", str);
+ result = VK_FALSE;
}
}
return result;
}
-static bool32_t freeMemObjInfo(XGL_GPU_MEMORY mem, bool internal)
+static bool32_t freeMemObjInfo(VK_GPU_MEMORY mem, bool internal)
{
- bool32_t result = XGL_TRUE;
+ bool32_t result = VK_TRUE;
// Parse global list to find info w/ mem
MT_MEM_OBJ_INFO* pInfo = getMemObjInfo(mem);
if (!pInfo) {
char str[1024];
sprintf(str, "Couldn't find mem info object for %p\n Was %p never allocated or previously freed?", (void*)mem, (void*)mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
+ result = VK_FALSE;
} else {
if (pInfo->allocInfo.allocationSize == 0 && !internal) {
char str[1024];
sprintf(str, "Attempting to free memory associated with a Presentable Image, %p, this should not be explicitly freed\n", (void*)mem);
- layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
- result = XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
+ result = VK_FALSE;
} else {
// Clear any CB bindings for completed CBs
// TODO : Is there a better place to do this?
- list<XGL_CMD_BUFFER>::iterator it = pInfo->pCmdBufferBindings.begin();
- list<XGL_CMD_BUFFER>::iterator temp;
+ list<VK_CMD_BUFFER>::iterator it = pInfo->pCmdBufferBindings.begin();
+ list<VK_CMD_BUFFER>::iterator temp;
while (it != pInfo->pCmdBufferBindings.end()) {
- if (XGL_TRUE == checkCBCompleted(*it)) {
+ if (VK_TRUE == checkCBCompleted(*it)) {
temp = it;
++temp;
freeCBBindings(*it);
// If references remain, report the error and can search CB list to find references
char str[1024];
sprintf(str, "Freeing mem obj %p while it still has references", (void*)mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_FREED_MEM_REF, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_FREED_MEM_REF, "MEM", str);
reportMemReferences(pInfo);
- result = XGL_FALSE;
+ result = VK_FALSE;
}
// Delete mem obj info
deleteMemObjInfo(mem);
// 1. Remove ObjectInfo from MemObjInfo list container of obj bindings & free it
// 2. Decrement refCount for MemObjInfo
// 3. Clear MemObjInfo ptr from ObjectInfo
-static bool32_t clearObjectBinding(XGL_OBJECT object)
+static bool32_t clearObjectBinding(VK_OBJECT object)
{
- bool32_t result = XGL_FALSE;
+ bool32_t result = VK_FALSE;
MT_OBJ_INFO* pObjInfo = getObjectInfo(object);
if (!pObjInfo) {
char str[1024];
sprintf(str, "Attempting to clear mem binding for object %p: devices, queues, command buffers, shaders and memory objects do not have external memory requirements and it is unnecessary to call bind/unbindObjectMemory on them.", object);
- layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INVALID_OBJECT, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INVALID_OBJECT, "MEM", str);
} else {
if (!pObjInfo->pMemObjInfo) {
char str[1024];
sprintf(str, "Attempting to clear mem binding on obj %p but it has no binding.", (void*)object);
- layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_MEM_OBJ_CLEAR_EMPTY_BINDINGS, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_MEM_OBJ_CLEAR_EMPTY_BINDINGS, "MEM", str);
} else {
- for (list<XGL_OBJECT>::iterator it = pObjInfo->pMemObjInfo->pObjBindings.begin(); it != pObjInfo->pMemObjInfo->pObjBindings.end(); ++it) {
+ for (list<VK_OBJECT>::iterator it = pObjInfo->pMemObjInfo->pObjBindings.begin(); it != pObjInfo->pMemObjInfo->pObjBindings.end(); ++it) {
pObjInfo->pMemObjInfo->refCount--;
it = pObjInfo->pMemObjInfo->pObjBindings.erase(it);
pObjInfo->pMemObjInfo = NULL;
- result = XGL_TRUE;
+ result = VK_TRUE;
break;
}
- if (result == XGL_FALSE) {
+ if (result == VK_FALSE) {
char str[1024];
sprintf(str, "While trying to clear mem binding for object %p, unable to find that object referenced by mem obj %p",
object, pObjInfo->pMemObjInfo->mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
}
}
}
// IF a previous binding existed, clear it
// Add reference from objectInfo to memoryInfo
// Add reference off of objInfo
-// Return XGL_TRUE if addition is successful, XGL_FALSE otherwise
-static bool32_t updateObjectBinding(XGL_OBJECT object, XGL_GPU_MEMORY mem)
+// Return VK_TRUE if addition is successful, VK_FALSE otherwise
+static bool32_t updateObjectBinding(VK_OBJECT object, VK_GPU_MEMORY mem)
{
- bool32_t result = XGL_FALSE;
+ bool32_t result = VK_FALSE;
// Handle NULL case separately, just clear previous binding & decrement reference
- if (mem == XGL_NULL_HANDLE) {
+ if (mem == VK_NULL_HANDLE) {
clearObjectBinding(object);
- result = XGL_TRUE;
+ result = VK_TRUE;
} else {
char str[1024];
MT_OBJ_INFO* pObjInfo = getObjectInfo(object);
if (!pObjInfo) {
sprintf(str, "Attempting to update Binding of Obj(%p) that's not in global list()", (void*)object);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
- return XGL_FALSE;
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
+ return VK_FALSE;
}
// non-null case so should have real mem obj
MT_MEM_OBJ_INFO* pInfo = getMemObjInfo(mem);
if (!pInfo) {
sprintf(str, "While trying to bind mem for obj %p, couldn't find info for mem obj %p", (void*)object, (void*)mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_OBJ, "MEM", str);
} else {
// Search for object in memory object's binding list
- bool32_t found = XGL_FALSE;
- for (list<XGL_OBJECT>::iterator it = pInfo->pObjBindings.begin(); it != pInfo->pObjBindings.end(); ++it) {
+ bool32_t found = VK_FALSE;
+ for (list<VK_OBJECT>::iterator it = pInfo->pObjBindings.begin(); it != pInfo->pObjBindings.end(); ++it) {
if ((*it) == object) {
- found = XGL_TRUE;
+ found = VK_TRUE;
break;
}
}
// If not present, add to list
- if (found == XGL_FALSE) {
+ if (found == VK_FALSE) {
pInfo->pObjBindings.push_front(object);
pInfo->refCount++;
}
if (pObjInfo->pMemObjInfo) {
sprintf(str, "Updating memory binding for object %p from mem obj %p to %p", object, pObjInfo->pMemObjInfo->mem, mem);
clearObjectBinding(object); // Need to clear the previous object binding before setting new binding
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_NONE, "MEM", str);
}
// For image objects, make sure default memory state is correctly set
// TODO : What's the best/correct way to handle this?
- if (XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO == pObjInfo->sType) {
- if (pObjInfo->create_info.image_create_info.usage & (XGL_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | XGL_IMAGE_USAGE_DEPTH_STENCIL_BIT)) {
+ if (VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO == pObjInfo->sType) {
+ if (pObjInfo->create_info.image_create_info.usage & (VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_DEPTH_STENCIL_BIT)) {
// TODO:: More memory state transition stuff.
}
}
pObjInfo->pMemObjInfo = pInfo;
}
}
- return XGL_TRUE;
+ return VK_TRUE;
}
// Print details of global Obj tracking list
MT_OBJ_INFO* pInfo = NULL;
char str[1024];
sprintf(str, "Details of Object list of size %lu elements", objectMap.size());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
- for (map<XGL_OBJECT, MT_OBJ_INFO*>::iterator ii=objectMap.begin(); ii!=objectMap.end(); ++ii) {
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ for (map<VK_OBJECT, MT_OBJ_INFO*>::iterator ii=objectMap.begin(); ii!=objectMap.end(); ++ii) {
pInfo = (*ii).second;
sprintf(str, " ObjInfo %p has object %p, pMemObjInfo %p", pInfo, pInfo->object, pInfo->pMemObjInfo);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pInfo->object, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pInfo->object, 0, MEMTRACK_NONE, "MEM", str);
}
}
// For given Object, get 'mem' obj that it's bound to or NULL if no binding
-static XGL_GPU_MEMORY getMemBindingFromObject(const XGL_OBJECT object)
+static VK_GPU_MEMORY getMemBindingFromObject(const VK_OBJECT object)
{
- XGL_GPU_MEMORY mem = NULL;
+ VK_GPU_MEMORY mem = NULL;
MT_OBJ_INFO* pObjInfo = getObjectInfo(object);
if (pObjInfo) {
if (pObjInfo->pMemObjInfo) {
else {
char str[1024];
sprintf(str, "Trying to get mem binding for object %p but object has no mem binding", (void*)object);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_MISSING_MEM_BINDINGS, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_MISSING_MEM_BINDINGS, "MEM", str);
printObjList();
}
}
else {
char str[1024];
sprintf(str, "Trying to get mem binding for object %p but no such object in global list", (void*)object);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INVALID_OBJECT, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_INVALID_OBJECT, "MEM", str);
printObjList();
}
return mem;
// Just printing each msg individually for now, may want to package these into single large print
char str[1024];
sprintf(str, "MEM INFO : Details of Memory Object list of size %lu elements", memObjMap.size());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
- for (map<XGL_GPU_MEMORY, MT_MEM_OBJ_INFO*>::iterator ii=memObjMap.begin(); ii!=memObjMap.end(); ++ii) {
+ for (map<VK_GPU_MEMORY, MT_MEM_OBJ_INFO*>::iterator ii=memObjMap.begin(); ii!=memObjMap.end(); ++ii) {
pInfo = (*ii).second;
sprintf(str, " ===MemObjInfo at %p===", (void*)pInfo);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
sprintf(str, " Mem object: %p", (void*)pInfo->mem);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
sprintf(str, " Ref Count: %u", pInfo->refCount);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
if (0 != pInfo->allocInfo.allocationSize) {
- string pAllocInfoMsg = xgl_print_xgl_memory_alloc_info(&pInfo->allocInfo, "{MEM}INFO : ");
+ string pAllocInfoMsg = vk_print_vk_memory_alloc_info(&pInfo->allocInfo, "{MEM}INFO : ");
sprintf(str, " Mem Alloc info:\n%s", pAllocInfoMsg.c_str());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
} else {
- sprintf(str, " Mem Alloc info is NULL (alloc done by xglWsiX11CreatePresentableImage())");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ sprintf(str, " Mem Alloc info is NULL (alloc done by vkWsiX11CreatePresentableImage())");
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
}
- sprintf(str, " XGL OBJECT Binding list of size %lu elements:", pInfo->pObjBindings.size());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
- for (list<XGL_OBJECT>::iterator it = pInfo->pObjBindings.begin(); it != pInfo->pObjBindings.end(); ++it) {
- sprintf(str, " XGL OBJECT %p", (*it));
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ sprintf(str, " VK OBJECT Binding list of size %lu elements:", pInfo->pObjBindings.size());
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ for (list<VK_OBJECT>::iterator it = pInfo->pObjBindings.begin(); it != pInfo->pObjBindings.end(); ++it) {
+ sprintf(str, " VK OBJECT %p", (*it));
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
}
- sprintf(str, " XGL Command Buffer (CB) binding list of size %lu elements", pInfo->pCmdBufferBindings.size());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
- for (list<XGL_CMD_BUFFER>::iterator it = pInfo->pCmdBufferBindings.begin(); it != pInfo->pCmdBufferBindings.end(); ++it) {
- sprintf(str, " XGL CB %p", (*it));
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ sprintf(str, " VK Command Buffer (CB) binding list of size %lu elements", pInfo->pCmdBufferBindings.size());
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ for (list<VK_CMD_BUFFER>::iterator it = pInfo->pCmdBufferBindings.begin(); it != pInfo->pCmdBufferBindings.end(); ++it) {
+ sprintf(str, " VK CB %p", (*it));
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
}
}
}
char str[1024] = {0};
MT_CB_INFO* pCBInfo = NULL;
sprintf(str, "Details of CB list of size %lu elements", cbMap.size());
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
- for (map<XGL_CMD_BUFFER, MT_CB_INFO*>::iterator ii=cbMap.begin(); ii!=cbMap.end(); ++ii) {
+ for (map<VK_CMD_BUFFER, MT_CB_INFO*>::iterator ii=cbMap.begin(); ii!=cbMap.end(); ++ii) {
pCBInfo = (*ii).second;
sprintf(str, " CB Info (%p) has CB %p, fenceId %" PRIx64", and fence %p",
(void*)pCBInfo, (void*)pCBInfo->cmdBuffer, pCBInfo->fenceId,
(void*)getFenceFromId(pCBInfo->fenceId));
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
- for (list<XGL_GPU_MEMORY>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
+ for (list<VK_GPU_MEMORY>::iterator it = pCBInfo->pMemObjList.begin(); it != pCBInfo->pMemObjList.end(); ++it) {
sprintf(str, " Mem obj %p", (*it));
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, MEMTRACK_NONE, "MEM", str);
}
}
}
getLayerOptionEnum("MemTrackerReportLevel", (uint32_t *) &g_reportingLevel);
g_actionIsDefault = getLayerOptionEnum("MemTrackerDebugAction", (uint32_t *) &g_debugAction);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
{
strOpt = getLayerOption("MemTrackerLogFilename");
if (strOpt)
// initialize Layer dispatch table
// TODO handle multiple GPUs
- xglGetProcAddrType fpNextGPA;
+ vkGetProcAddrType fpNextGPA;
fpNextGPA = pCurObj->pGPA;
assert(fpNextGPA);
- layer_initialize_dispatch_table(&nextTable, fpNextGPA, (XGL_PHYSICAL_GPU) pCurObj->nextObject);
+ layer_initialize_dispatch_table(&nextTable, fpNextGPA, (VK_PHYSICAL_GPU) pCurObj->nextObject);
- xglGetProcAddrType fpGetProcAddr = (xglGetProcAddrType)fpNextGPA((XGL_PHYSICAL_GPU) pCurObj->nextObject, (char *) "xglGetProcAddr");
+ vkGetProcAddrType fpGetProcAddr = (vkGetProcAddrType)fpNextGPA((VK_PHYSICAL_GPU) pCurObj->nextObject, (char *) "vkGetProcAddr");
nextTable.GetProcAddr = fpGetProcAddr;
if (!globalLockInitialized)
{
// TODO/TBD: Need to delete this mutex sometime. How??? One
- // suggestion is to call this during xglCreateInstance(), and then we
- // can clean it up during xglDestroyInstance(). However, that requires
+ // suggestion is to call this during vkCreateInstance(), and then we
+ // can clean it up during vkDestroyInstance(). However, that requires
// that the layer have per-instance locks. We need to come back and
// address this soon.
loader_platform_thread_create_mutex(&globalLock);
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&g_initOnce, initMemTracker);
- XGL_RESULT result = nextTable.CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ VK_RESULT result = nextTable.CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
// Save off device in case we need it to create Fences
globalDevice = *pDevice;
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyDevice(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyDevice(VK_DEVICE device)
{
char str[1024];
- sprintf(str, "Printing List details prior to xglDestroyDevice()");
+ sprintf(str, "Printing List details prior to vkDestroyDevice()");
loader_platform_thread_lock_mutex(&globalLock);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, device, 0, MEMTRACK_NONE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, device, 0, MEMTRACK_NONE, "MEM", str);
printMemList();
printCBList();
printObjList();
- if (XGL_FALSE == deleteCBInfoList()) {
- sprintf(str, "Issue deleting global CB list in xglDestroyDevice()");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, device, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
+ if (VK_FALSE == deleteCBInfoList()) {
+ sprintf(str, "Issue deleting global CB list in vkDestroyDevice()");
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, device, 0, MEMTRACK_INTERNAL_ERROR, "MEM", str);
}
// Report any memory leaks
MT_MEM_OBJ_INFO* pInfo = NULL;
- for (map<XGL_GPU_MEMORY, MT_MEM_OBJ_INFO*>::iterator ii=memObjMap.begin(); ii!=memObjMap.end(); ++ii) {
+ for (map<VK_GPU_MEMORY, MT_MEM_OBJ_INFO*>::iterator ii=memObjMap.begin(); ii!=memObjMap.end(); ++ii) {
pInfo = (*ii).second;
if (pInfo->allocInfo.allocationSize != 0) {
- sprintf(str, "Mem Object %p has not been freed. You should clean up this memory by calling xglFreeMemory(%p) prior to xglDestroyDevice().",
+ sprintf(str, "Mem Object %p has not been freed. You should clean up this memory by calling vkFreeMemory(%p) prior to vkDestroyDevice().",
pInfo->mem, pInfo->mem);
- layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, pInfo->mem, 0, MEMTRACK_MEMORY_LEAK, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, pInfo->mem, 0, MEMTRACK_MEMORY_LEAK, "MEM", str);
}
}
deleteQueueInfoList();
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.DestroyDevice(device);
+ VK_RESULT result = nextTable.DestroyDevice(device);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_RESULT result;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_RESULT result;
/* This entrypoint is NOT going to init its own dispatch table since loader calls here early */
if (!strcmp(pExtName, "MemTracker"))
{
- result = XGL_SUCCESS;
+ result = VK_SUCCESS;
} else if (nextTable.GetExtensionSupport != NULL)
{
- result = nextTable.GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ result = nextTable.GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
} else
{
- result = XGL_ERROR_INVALID_EXTENSION;
+ result = VK_ERROR_INVALID_EXTENSION;
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount,
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount,
size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
{
if (gpu != NULL)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&g_initOnce, initMemTracker);
- XGL_RESULT result = nextTable.EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount,
+ VK_RESULT result = nextTable.EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount,
maxStringSize, pOutLayerCount, pOutLayers, pReserved);
return result;
} else
{
if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
// This layer compatible with all GPUs
*pOutLayerCount = 1;
strncpy((char *) pOutLayers[0], "MemTracker", maxStringSize);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetDeviceQueue(XGL_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, XGL_QUEUE* pQueue)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetDeviceQueue(VK_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, VK_QUEUE* pQueue)
{
- XGL_RESULT result = nextTable.GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
addQueueInfo(*pQueue);
loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueAddMemReference(XGL_QUEUE queue, XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueAddMemReference(VK_QUEUE queue, VK_GPU_MEMORY mem)
{
- XGL_RESULT result = nextTable.QueueAddMemReference(queue, mem);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.QueueAddMemReference(queue, mem);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
MT_QUEUE_INFO *pQueueInfo = queueMap[queue];
if (pQueueInfo == NULL) {
char str[1024];
sprintf(str, "Unknown Queue %p", queue);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_QUEUE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_QUEUE, "MEM", str);
}
else {
- if (checkMemRef(queue, mem) == XGL_TRUE) {
+ if (checkMemRef(queue, mem) == VK_TRUE) {
                // Already in list, just warn
char str[1024];
sprintf(str, "Request to add a memory reference (%p) to Queue %p -- ref is already present in the queue's reference list", mem, queue);
- layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_REF, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_MEM_REF, "MEM", str);
}
else {
// Add to queue's memory reference list
return result;
}
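The hunk above tracks per-queue memory references: a reference that is already present only produces a warning, while a new one is appended to the queue's list. A minimal self-contained sketch of that bookkeeping (the handle type and the `addMemRef`/`memRefList` names are illustrative stand-ins, not the layer's actual API):

```cpp
#include <algorithm>
#include <cstdio>
#include <list>

using MemHandle = int;  // hypothetical stand-in for VK_GPU_MEMORY

// Hypothetical stand-in for one queue's MT_QUEUE_INFO::pMemRefList.
static std::list<MemHandle> memRefList;

// Mirrors the checkMemRef()/add path above: duplicates only warn,
// new references are appended to the queue's reference list.
bool addMemRef(MemHandle mem) {
    if (std::find(memRefList.begin(), memRefList.end(), mem) != memRefList.end()) {
        std::printf("mem ref %d already present in the queue's reference list\n", mem);
        return false;  // the layer would report MEMTRACK_INVALID_MEM_REF here
    }
    memRefList.push_back(mem);
    return true;
}
```

The remove path in the next hunk walks the same list and erases the matching entry.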
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueRemoveMemReference(XGL_QUEUE queue, XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueRemoveMemReference(VK_QUEUE queue, VK_GPU_MEMORY mem)
{
// TODO : Decrement ref count for this memory reference on this queue. Remove if ref count is zero.
- XGL_RESULT result = nextTable.QueueRemoveMemReference(queue, mem);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.QueueRemoveMemReference(queue, mem);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
MT_QUEUE_INFO *pQueueInfo = queueMap[queue];
if (pQueueInfo == NULL) {
char str[1024];
sprintf(str, "Unknown Queue %p", queue);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_QUEUE, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_QUEUE, "MEM", str);
}
else {
- for (list<XGL_GPU_MEMORY>::iterator it = pQueueInfo->pMemRefList.begin(); it != pQueueInfo->pMemRefList.end(); ++it) {
+ for (list<VK_GPU_MEMORY>::iterator it = pQueueInfo->pMemRefList.begin(); it != pQueueInfo->pMemRefList.end(); ++it) {
if ((*it) == mem) {
it = pQueueInfo->pMemRefList.erase(it);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueSubmit(
- XGL_QUEUE queue,
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueSubmit(
+ VK_QUEUE queue,
uint32_t cmdBufferCount,
- const XGL_CMD_BUFFER *pCmdBuffers,
- XGL_FENCE fence)
+ const VK_CMD_BUFFER *pCmdBuffers,
+ VK_FENCE fence)
{
loader_platform_thread_lock_mutex(&globalLock);
// TODO : Need to track fence and clear mem references when fence clears
pCBInfo->fenceId = fenceId;
}
- if (XGL_FALSE == validateQueueMemRefs(queue, cmdBufferCount, pCmdBuffers)) {
+ if (VK_FALSE == validateQueueMemRefs(queue, cmdBufferCount, pCmdBuffers)) {
char str[1024];
sprintf(str, "Unable to verify memory references for Queue %p", queue);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_MEM_REF, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_INVALID_MEM_REF, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, getFenceFromId(fenceId));
+ VK_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, getFenceFromId(fenceId));
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglAllocMemory(XGL_DEVICE device, const XGL_MEMORY_ALLOC_INFO* pAllocInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkAllocMemory(VK_DEVICE device, const VK_MEMORY_ALLOC_INFO* pAllocInfo, VK_GPU_MEMORY* pMem)
{
- XGL_RESULT result = nextTable.AllocMemory(device, pAllocInfo, pMem);
+ VK_RESULT result = nextTable.AllocMemory(device, pAllocInfo, pMem);
// TODO : Track allocations and overall size here
loader_platform_thread_lock_mutex(&globalLock);
addMemObjInfo(*pMem, pAllocInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglFreeMemory(XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkFreeMemory(VK_GPU_MEMORY mem)
{
- /* From spec : A memory object is freed by calling xglFreeMemory() when it is no longer needed. Before
+ /* From spec : A memory object is freed by calling vkFreeMemory() when it is no longer needed. Before
* freeing a memory object, an application must ensure the memory object is unbound from
* all API objects referencing it and that it is not referenced by any queued command buffers
*/
loader_platform_thread_lock_mutex(&globalLock);
- if (XGL_FALSE == freeMemObjInfo(mem, false)) {
+ if (VK_FALSE == freeMemObjInfo(mem, false)) {
char str[1024];
sprintf(str, "Issue while freeing mem obj %p", (void*)mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_FREE_MEM_ERROR, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_FREE_MEM_ERROR, "MEM", str);
}
printMemList();
printObjList();
printCBList();
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.FreeMemory(mem);
+ VK_RESULT result = nextTable.FreeMemory(mem);
return result;
}
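Per the spec comment above, a memory object must be unbound from every API object before it is freed, and `freeMemObjInfo()` fails when that invariant is violated. A minimal self-contained sketch of that check, assuming a simple mem-to-bound-objects map (the handle types and function names are illustrative, not the layer's real data structures):

```cpp
#include <cstdio>
#include <map>
#include <set>

using ObjHandle = int;  // hypothetical stand-in for VK_OBJECT
using MemHandle = int;  // hypothetical stand-in for VK_GPU_MEMORY

// Hypothetical stand-in for the mem-obj -> bound-objects bookkeeping.
static std::map<MemHandle, std::set<ObjHandle>> memBindings;

void bindObject(ObjHandle obj, MemHandle mem)   { memBindings[mem].insert(obj); }
void clearBinding(ObjHandle obj, MemHandle mem) { memBindings[mem].erase(obj); }

// Mirrors freeMemObjInfo(): a free attempt while objects are still
// bound fails, which vkFreeMemory() surfaces as MEMTRACK_FREE_MEM_ERROR.
bool freeMem(MemHandle mem) {
    auto it = memBindings.find(mem);
    if (it != memBindings.end() && !it->second.empty()) {
        std::printf("mem obj %d still has %zu bound object(s)\n",
                    mem, it->second.size());
        return false;
    }
    memBindings.erase(mem);
    return true;
}
```

The same invariant drives the `vkDestroyObject` path further down: destroying a still-bound object triggers MEMTRACK_DESTROY_OBJECT_ERROR and the binding is cleared on the app's behalf.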
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglSetMemoryPriority(XGL_GPU_MEMORY mem, XGL_MEMORY_PRIORITY priority)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkSetMemoryPriority(VK_GPU_MEMORY mem, VK_MEMORY_PRIORITY priority)
{
// TODO : Update tracking for this alloc
// Make sure memory is not pinned, which can't have priority set
- XGL_RESULT result = nextTable.SetMemoryPriority(mem, priority);
+ VK_RESULT result = nextTable.SetMemoryPriority(mem, priority);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglMapMemory(XGL_GPU_MEMORY mem, XGL_FLAGS flags, void** ppData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkMapMemory(VK_GPU_MEMORY mem, VK_FLAGS flags, void** ppData)
{
// TODO : Track when memory is mapped
loader_platform_thread_lock_mutex(&globalLock);
MT_MEM_OBJ_INFO *pMemObj = getMemObjInfo(mem);
- if ((pMemObj->allocInfo.memProps & XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT) == 0) {
+ if ((pMemObj->allocInfo.memProps & VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT) == 0) {
char str[1024];
- sprintf(str, "Mapping Memory (%p) without XGL_MEMORY_PROPERTY_CPU_VISIBLE_BIT set", (void*)mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_STATE, "MEM", str);
+ sprintf(str, "Mapping Memory (%p) without VK_MEMORY_PROPERTY_CPU_VISIBLE_BIT set", (void*)mem);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, mem, 0, MEMTRACK_INVALID_STATE, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.MapMemory(mem, flags, ppData);
+ VK_RESULT result = nextTable.MapMemory(mem, flags, ppData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglUnmapMemory(XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkUnmapMemory(VK_GPU_MEMORY mem)
{
// TODO : Track as memory gets unmapped, do we want to check what changed following map?
// Make sure that memory was ever mapped to begin with
- XGL_RESULT result = nextTable.UnmapMemory(mem);
+ VK_RESULT result = nextTable.UnmapMemory(mem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglPinSystemMemory(XGL_DEVICE device, const void* pSysMem, size_t memSize, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkPinSystemMemory(VK_DEVICE device, const void* pSysMem, size_t memSize, VK_GPU_MEMORY* pMem)
{
// TODO : Track this
// Verify that memory is actually pinnable
- XGL_RESULT result = nextTable.PinSystemMemory(device, pSysMem, memSize, pMem);
+ VK_RESULT result = nextTable.PinSystemMemory(device, pSysMem, memSize, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenSharedMemory(XGL_DEVICE device, const XGL_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenSharedMemory(VK_DEVICE device, const VK_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem)
{
// TODO : Track this
- XGL_RESULT result = nextTable.OpenSharedMemory(device, pOpenInfo, pMem);
+ VK_RESULT result = nextTable.OpenSharedMemory(device, pOpenInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenPeerMemory(XGL_DEVICE device, const XGL_PEER_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenPeerMemory(VK_DEVICE device, const VK_PEER_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem)
{
// TODO : Track this
- XGL_RESULT result = nextTable.OpenPeerMemory(device, pOpenInfo, pMem);
+ VK_RESULT result = nextTable.OpenPeerMemory(device, pOpenInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenPeerImage(XGL_DEVICE device, const XGL_PEER_IMAGE_OPEN_INFO* pOpenInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenPeerImage(VK_DEVICE device, const VK_PEER_IMAGE_OPEN_INFO* pOpenInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem)
{
// TODO : Track this
- XGL_RESULT result = nextTable.OpenPeerImage(device, pOpenInfo, pImage, pMem);
+ VK_RESULT result = nextTable.OpenPeerImage(device, pOpenInfo, pImage, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyObject(XGL_OBJECT object)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyObject(VK_OBJECT object)
{
loader_platform_thread_lock_mutex(&globalLock);
// First check if this is a CmdBuffer
- if (NULL != getCBInfo((XGL_CMD_BUFFER)object)) {
- deleteCBInfo((XGL_CMD_BUFFER)object);
+ if (NULL != getCBInfo((VK_CMD_BUFFER)object)) {
+ deleteCBInfo((VK_CMD_BUFFER)object);
}
if (objectMap.find(object) != objectMap.end()) {
if (pDelInfo->pMemObjInfo) {
// Wsi allocated Memory is tied to image object so clear the binding and free that memory automatically
if (0 == pDelInfo->pMemObjInfo->allocInfo.allocationSize) { // Wsi allocated memory has NULL allocInfo w/ 0 size
- XGL_GPU_MEMORY memToFree = pDelInfo->pMemObjInfo->mem;
+ VK_GPU_MEMORY memToFree = pDelInfo->pMemObjInfo->mem;
clearObjectBinding(object);
freeMemObjInfo(memToFree, true);
}
else {
char str[1024];
- sprintf(str, "Destroying obj %p that is still bound to memory object %p\nYou should first clear binding by calling xglBindObjectMemory(%p, 0, XGL_NULL_HANDLE, 0)", object, (void*)pDelInfo->pMemObjInfo->mem, object);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_DESTROY_OBJECT_ERROR, "MEM", str);
+ sprintf(str, "Destroying obj %p that is still bound to memory object %p\nYou should first clear binding by calling vkBindObjectMemory(%p, 0, VK_NULL_HANDLE, 0)", object, (void*)pDelInfo->pMemObjInfo->mem, object);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_DESTROY_OBJECT_ERROR, "MEM", str);
// From the spec : If an object has previous memory binding, it is required to unbind memory from an API object before it is destroyed.
clearObjectBinding(object);
}
}
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.DestroyObject(object);
+ VK_RESULT result = nextTable.DestroyObject(object);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetObjectInfo(XGL_BASE_OBJECT object, XGL_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetObjectInfo(VK_BASE_OBJECT object, VK_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
// TODO : What to track here?
// Could potentially save returned mem requirements and validate values passed into BindObjectMemory for this object
// From spec : The only objects that are guaranteed to have no external memory requirements are devices, queues, command buffers, shaders and memory objects.
- XGL_RESULT result = nextTable.GetObjectInfo(object, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetObjectInfo(object, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindObjectMemory(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_MEMORY mem, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindObjectMemory(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_MEMORY mem, VK_GPU_SIZE offset)
{
- XGL_RESULT result = nextTable.BindObjectMemory(object, allocationIdx, mem, offset);
+ VK_RESULT result = nextTable.BindObjectMemory(object, allocationIdx, mem, offset);
loader_platform_thread_lock_mutex(&globalLock);
// Track objects tied to memory
- if (XGL_FALSE == updateObjectBinding(object, mem)) {
+ if (VK_FALSE == updateObjectBinding(object, mem)) {
char str[1024];
sprintf(str, "Unable to set object %p binding to mem obj %p", (void*)object, (void*)mem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
printObjList();
printMemList();
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateFence(XGL_DEVICE device, const XGL_FENCE_CREATE_INFO* pCreateInfo, XGL_FENCE* pFence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateFence(VK_DEVICE device, const VK_FENCE_CREATE_INFO* pCreateInfo, VK_FENCE* pFence)
{
- XGL_RESULT result = nextTable.CreateFence(device, pCreateInfo, pFence);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateFence(device, pCreateInfo, pFence);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pFence, pCreateInfo->sType, pCreateInfo, sizeof(XGL_FENCE_CREATE_INFO), "fence");
+ addObjectInfo(*pFence, pCreateInfo->sType, pCreateInfo, sizeof(VK_FENCE_CREATE_INFO), "fence");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetFences(XGL_DEVICE device, uint32_t fenceCount, XGL_FENCE* pFences)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetFences(VK_DEVICE device, uint32_t fenceCount, VK_FENCE* pFences)
{
- XGL_RESULT result = nextTable.ResetFences(device, fenceCount, pFences);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.ResetFences(device, fenceCount, pFences);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
// Reset fence state in fenceCreateInfo structure
for (uint32_t i = 0; i < fenceCount; i++) {
MT_OBJ_INFO* pObjectInfo = getObjectInfo(pFences[i]);
if (pObjectInfo != NULL) {
pObjectInfo->create_info.fence_create_info.flags =
- static_cast<XGL_FENCE_CREATE_FLAGS>(pObjectInfo->create_info.fence_create_info.flags & ~XGL_FENCE_CREATE_SIGNALED_BIT);
+ static_cast<VK_FENCE_CREATE_FLAGS>(pObjectInfo->create_info.fence_create_info.flags & ~VK_FENCE_CREATE_SIGNALED_BIT);
}
}
loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetFenceStatus(XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetFenceStatus(VK_FENCE fence)
{
- XGL_RESULT result = nextTable.GetFenceStatus(fence);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.GetFenceStatus(fence);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
updateFenceTracking(fence);
loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWaitForFences(XGL_DEVICE device, uint32_t fenceCount, const XGL_FENCE* pFences, bool32_t waitAll, uint64_t timeout)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWaitForFences(VK_DEVICE device, uint32_t fenceCount, const VK_FENCE* pFences, bool32_t waitAll, uint64_t timeout)
{
// Verify fence status of submitted fences
for(uint32_t i = 0; i < fenceCount; i++) {
MT_OBJ_INFO* pObjectInfo = getObjectInfo(pFences[i]);
if (pObjectInfo != NULL) {
- if (pObjectInfo->create_info.fence_create_info.flags == XGL_FENCE_CREATE_SIGNALED_BIT) {
+ if (pObjectInfo->create_info.fence_create_info.flags == VK_FENCE_CREATE_SIGNALED_BIT) {
char str[1024];
- sprintf(str, "xglWaitForFences specified signaled-state Fence %p. Fences must be reset before being submitted", pFences[i]);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pFences[i], 0, MEMTRACK_INVALID_FENCE_STATE, "MEM", str);
+ sprintf(str, "vkWaitForFences specified signaled-state Fence %p. Fences must be reset before being submitted", pFences[i]);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pFences[i], 0, MEMTRACK_INVALID_FENCE_STATE, "MEM", str);
}
}
}
- XGL_RESULT result = nextTable.WaitForFences(device, fenceCount, pFences, waitAll, timeout);
+ VK_RESULT result = nextTable.WaitForFences(device, fenceCount, pFences, waitAll, timeout);
loader_platform_thread_lock_mutex(&globalLock);
- if (XGL_SUCCESS == result) {
+ if (VK_SUCCESS == result) {
if (waitAll || fenceCount == 1) { // Clear all the fences
for(uint32_t i = 0; i < fenceCount; i++) {
updateFenceTracking(pFences[i]);
return result;
}
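The fence checks above enforce a simple lifecycle: a fence still carrying the signaled flag must be reset before it is waited on again. A minimal self-contained sketch of that state machine (the `FenceInfo` struct and function names are illustrative stand-ins for the layer's MT_OBJ_INFO tracking):

```cpp
#include <cstdio>

// Hypothetical stand-in for the layer's per-fence create-info flags.
enum { FENCE_SIGNALED_BIT = 0x1 };
struct FenceInfo { unsigned flags; };

// Mirrors the vkWaitForFences() check: waiting on a fence still in the
// signaled state means the app skipped the reset before reuse.
bool validateFenceForWait(const FenceInfo &f) {
    if (f.flags & FENCE_SIGNALED_BIT) {
        std::printf("Fence must be reset before being submitted again\n");
        return false;  // reported as MEMTRACK_INVALID_FENCE_STATE
    }
    return true;
}

// Mirrors vkResetFences(): clear the signaled bit so reuse is legal,
// as the hunk above does with the masked static_cast on the flags.
void resetFence(FenceInfo &f) { f.flags &= ~unsigned(FENCE_SIGNALED_BIT); }
```

On a successful wait, the layer then calls `updateFenceTracking()` so queued command-buffer memory references tied to the fence can be retired.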
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueWaitIdle(XGL_QUEUE queue)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueWaitIdle(VK_QUEUE queue)
{
- XGL_RESULT result = nextTable.QueueWaitIdle(queue);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.QueueWaitIdle(queue);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
retireQueueFences(queue);
loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDeviceWaitIdle(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDeviceWaitIdle(VK_DEVICE device)
{
- XGL_RESULT result = nextTable.DeviceWaitIdle(device);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.DeviceWaitIdle(device);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
retireDeviceFences(device);
loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateEvent(XGL_DEVICE device, const XGL_EVENT_CREATE_INFO* pCreateInfo, XGL_EVENT* pEvent)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateEvent(VK_DEVICE device, const VK_EVENT_CREATE_INFO* pCreateInfo, VK_EVENT* pEvent)
{
- XGL_RESULT result = nextTable.CreateEvent(device, pCreateInfo, pEvent);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateEvent(device, pCreateInfo, pEvent);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pEvent, pCreateInfo->sType, pCreateInfo, sizeof(XGL_EVENT_CREATE_INFO), "event");
+ addObjectInfo(*pEvent, pCreateInfo->sType, pCreateInfo, sizeof(VK_EVENT_CREATE_INFO), "event");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateQueryPool(XGL_DEVICE device, const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo, XGL_QUERY_POOL* pQueryPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateQueryPool(VK_DEVICE device, const VK_QUERY_POOL_CREATE_INFO* pCreateInfo, VK_QUERY_POOL* pQueryPool)
{
- XGL_RESULT result = nextTable.CreateQueryPool(device, pCreateInfo, pQueryPool);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateQueryPool(device, pCreateInfo, pQueryPool);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pQueryPool, pCreateInfo->sType, pCreateInfo, sizeof(XGL_QUERY_POOL_CREATE_INFO), "query_pool");
+ addObjectInfo(*pQueryPool, pCreateInfo->sType, pCreateInfo, sizeof(VK_QUERY_POOL_CREATE_INFO), "query_pool");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBuffer(XGL_DEVICE device, const XGL_BUFFER_CREATE_INFO* pCreateInfo, XGL_BUFFER* pBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBuffer(VK_DEVICE device, const VK_BUFFER_CREATE_INFO* pCreateInfo, VK_BUFFER* pBuffer)
{
- XGL_RESULT result = nextTable.CreateBuffer(device, pCreateInfo, pBuffer);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateBuffer(device, pCreateInfo, pBuffer);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pBuffer, pCreateInfo->sType, pCreateInfo, sizeof(XGL_BUFFER_CREATE_INFO), "buffer");
+ addObjectInfo(*pBuffer, pCreateInfo->sType, pCreateInfo, sizeof(VK_BUFFER_CREATE_INFO), "buffer");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBufferView(XGL_DEVICE device, const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo, XGL_BUFFER_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBufferView(VK_DEVICE device, const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo, VK_BUFFER_VIEW* pView)
{
- XGL_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(XGL_BUFFER_VIEW_CREATE_INFO), "buffer_view");
+ addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(VK_BUFFER_VIEW_CREATE_INFO), "buffer_view");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImage(XGL_DEVICE device, const XGL_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImage(VK_DEVICE device, const VK_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage)
{
- XGL_RESULT result = nextTable.CreateImage(device, pCreateInfo, pImage);
- if (XGL_SUCCESS == result) {
+ VK_RESULT result = nextTable.CreateImage(device, pCreateInfo, pImage);
+ if (VK_SUCCESS == result) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pImage, pCreateInfo->sType, pCreateInfo, sizeof(XGL_IMAGE_CREATE_INFO), "image");
+ addObjectInfo(*pImage, pCreateInfo->sType, pCreateInfo, sizeof(VK_IMAGE_CREATE_INFO), "image");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImageView(XGL_DEVICE device, const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo, XGL_IMAGE_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImageView(VK_DEVICE device, const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo, VK_IMAGE_VIEW* pView)
{
- XGL_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(XGL_IMAGE_VIEW_CREATE_INFO), "image_view");
+ addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(VK_IMAGE_VIEW_CREATE_INFO), "image_view");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateColorAttachmentView(XGL_DEVICE device, const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
- XGL_COLOR_ATTACHMENT_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateColorAttachmentView(VK_DEVICE device, const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo,
+ VK_COLOR_ATTACHMENT_VIEW* pView)
{
- XGL_RESULT result = nextTable.CreateColorAttachmentView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateColorAttachmentView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO), "color_attachment_view");
+ addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO), "color_attachment_view");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDepthStencilView(XGL_DEVICE device, const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, XGL_DEPTH_STENCIL_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDepthStencilView(VK_DEVICE device, const VK_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, VK_DEPTH_STENCIL_VIEW* pView)
{
- XGL_RESULT result = nextTable.CreateDepthStencilView(device, pCreateInfo, pView);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateDepthStencilView(device, pCreateInfo, pView);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(XGL_DEPTH_STENCIL_VIEW_CREATE_INFO), "ds_view");
+ addObjectInfo(*pView, pCreateInfo->sType, pCreateInfo, sizeof(VK_DEPTH_STENCIL_VIEW_CREATE_INFO), "ds_view");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateShader(XGL_DEVICE device, const XGL_SHADER_CREATE_INFO* pCreateInfo, XGL_SHADER* pShader)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateShader(VK_DEVICE device, const VK_SHADER_CREATE_INFO* pCreateInfo, VK_SHADER* pShader)
{
- XGL_RESULT result = nextTable.CreateShader(device, pCreateInfo, pShader);
+ VK_RESULT result = nextTable.CreateShader(device, pCreateInfo, pShader);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipeline(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipeline(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pPipeline, pCreateInfo->sType, pCreateInfo, sizeof(XGL_GRAPHICS_PIPELINE_CREATE_INFO), "graphics_pipeline");
+ addObjectInfo(*pPipeline, pCreateInfo->sType, pCreateInfo, sizeof(VK_GRAPHICS_PIPELINE_CREATE_INFO), "graphics_pipeline");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipelineDerivative(
- XGL_DEVICE device,
- const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE basePipeline,
- XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipelineDerivative(
+ VK_DEVICE device,
+ const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE basePipeline,
+ VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.CreateGraphicsPipelineDerivative(device, pCreateInfo, basePipeline, pPipeline);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateGraphicsPipelineDerivative(device, pCreateInfo, basePipeline, pPipeline);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pPipeline, pCreateInfo->sType, pCreateInfo, sizeof(XGL_GRAPHICS_PIPELINE_CREATE_INFO), "graphics_pipeline");
+ addObjectInfo(*pPipeline, pCreateInfo->sType, pCreateInfo, sizeof(VK_GRAPHICS_PIPELINE_CREATE_INFO), "graphics_pipeline");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateComputePipeline(XGL_DEVICE device, const XGL_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateComputePipeline(VK_DEVICE device, const VK_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.CreateComputePipeline(device, pCreateInfo, pPipeline);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateComputePipeline(device, pCreateInfo, pPipeline);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pPipeline, pCreateInfo->sType, pCreateInfo, sizeof(XGL_COMPUTE_PIPELINE_CREATE_INFO), "compute_pipeline");
+ addObjectInfo(*pPipeline, pCreateInfo->sType, pCreateInfo, sizeof(VK_COMPUTE_PIPELINE_CREATE_INFO), "compute_pipeline");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateSampler(XGL_DEVICE device, const XGL_SAMPLER_CREATE_INFO* pCreateInfo, XGL_SAMPLER* pSampler)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateSampler(VK_DEVICE device, const VK_SAMPLER_CREATE_INFO* pCreateInfo, VK_SAMPLER* pSampler)
{
- XGL_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pSampler, pCreateInfo->sType, pCreateInfo, sizeof(XGL_SAMPLER_CREATE_INFO), "sampler");
+ addObjectInfo(*pSampler, pCreateInfo->sType, pCreateInfo, sizeof(VK_SAMPLER_CREATE_INFO), "sampler");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicViewportState(XGL_DEVICE device, const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_VP_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicViewportState(VK_DEVICE device, const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_VP_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(XGL_DYNAMIC_VP_STATE_CREATE_INFO), "viewport_state");
+ addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(VK_DYNAMIC_VP_STATE_CREATE_INFO), "viewport_state");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicRasterState(XGL_DEVICE device, const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_RS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicRasterState(VK_DEVICE device, const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_RS_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(XGL_DYNAMIC_RS_STATE_CREATE_INFO), "raster_state");
+ addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(VK_DYNAMIC_RS_STATE_CREATE_INFO), "raster_state");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicColorBlendState(XGL_DEVICE device, const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_CB_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicColorBlendState(VK_DEVICE device, const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_CB_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(XGL_DYNAMIC_CB_STATE_CREATE_INFO), "cb_state");
+ addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(VK_DYNAMIC_CB_STATE_CREATE_INFO), "cb_state");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicDepthStencilState(XGL_DEVICE device, const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
- XGL_DYNAMIC_DS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicDepthStencilState(VK_DEVICE device, const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo,
+ VK_DYNAMIC_DS_STATE_OBJECT* pState)
{
- XGL_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
- if (result == XGL_SUCCESS) {
+ VK_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
+ if (result == VK_SUCCESS) {
loader_platform_thread_lock_mutex(&globalLock);
- addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(XGL_DYNAMIC_DS_STATE_CREATE_INFO), "ds_state");
+ addObjectInfo(*pState, pCreateInfo->sType, pCreateInfo, sizeof(VK_DYNAMIC_DS_STATE_CREATE_INFO), "ds_state");
loader_platform_thread_unlock_mutex(&globalLock);
}
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateCommandBuffer(XGL_DEVICE device, const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo, XGL_CMD_BUFFER* pCmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateCommandBuffer(VK_DEVICE device, const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo, VK_CMD_BUFFER* pCmdBuffer)
{
- XGL_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
+ VK_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
// At time of cmd buffer creation, create global cmd buffer info for the returned cmd buffer
loader_platform_thread_lock_mutex(&globalLock);
if (*pCmdBuffer)
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginCommandBuffer(XGL_CMD_BUFFER cmdBuffer, const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginCommandBuffer(VK_CMD_BUFFER cmdBuffer, const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
{
// This implicitly resets the Cmd Buffer so make sure any fence is done and then clear memory references
MT_CB_INFO* pCBInfo = getCBInfo(cmdBuffer);
if (pCBInfo && (!fenceRetired(pCBInfo->fenceId))) {
bool32_t cbDone = checkCBCompleted(cmdBuffer);
- if (XGL_FALSE == cbDone) {
+ if (VK_FALSE == cbDone) {
char str[1024];
- sprintf(str, "Calling xglBeginCommandBuffer() on active CB %p before it has completed. You must check CB flag before this call.", cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", str);
+ sprintf(str, "Calling vkBeginCommandBuffer() on active CB %p before it has completed. You must check CB flag before this call.", cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", str);
}
}
- XGL_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
+ VK_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
loader_platform_thread_lock_mutex(&globalLock);
freeCBBindings(cmdBuffer);
loader_platform_thread_unlock_mutex(&globalLock);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
// TODO : Anything to do here?
- XGL_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
+ VK_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
// Verify that CB is complete (not in-flight)
MT_CB_INFO* pCBInfo = getCBInfo(cmdBuffer);
if (pCBInfo && (!fenceRetired(pCBInfo->fenceId))) {
bool32_t cbDone = checkCBCompleted(cmdBuffer);
- if (XGL_FALSE == cbDone) {
+ if (VK_FALSE == cbDone) {
char str[1024];
- sprintf(str, "Resetting CB %p before it has completed. You must check CB flag before calling xglResetCommandBuffer().", cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", str);
+ sprintf(str, "Resetting CB %p before it has completed. You must check CB flag before calling vkResetCommandBuffer().", cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_RESET_CB_WHILE_IN_FLIGHT, "MEM", str);
}
}
    // Clear memory references at this point.
loader_platform_thread_lock_mutex(&globalLock);
freeCBBindings(cmdBuffer);
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
+ VK_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
return result;
}
-// TODO : For any xglCmdBind* calls that include an object which has mem bound to it,
+// TODO : For any vkCmdBind* calls that include an object which has mem bound to it,
// need to account for that mem now having binding to given cmdBuffer
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindPipeline(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_PIPELINE pipeline)
+VK_LAYER_EXPORT void VKAPI vkCmdBindPipeline(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_PIPELINE pipeline)
{
#if 0
// TODO : If memory bound to pipeline, then need to tie that mem to cmdBuffer
} else {
char str[1024];
        sprintf(str, "Attempt to bind Pipeline %p to non-existent command buffer %p!", (void*)pipeline, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_INVALID_CB, (char *) "DS", (char *) str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_INVALID_CB, (char *) "DS", (char *) str);
}
}
else {
char str[1024];
sprintf(str, "Attempt to bind Pipeline %p that doesn't exist!", (void*)pipeline);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pipeline, 0, MEMTRACK_INVALID_OBJECT, (char *) "DS", (char *) str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pipeline, 0, MEMTRACK_INVALID_OBJECT, (char *) "DS", (char *) str);
}
#endif
nextTable.CmdBindPipeline(cmdBuffer, pipelineBindPoint, pipeline);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDynamicStateObject(XGL_CMD_BUFFER cmdBuffer, XGL_STATE_BIND_POINT stateBindPoint, XGL_DYNAMIC_STATE_OBJECT state)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDynamicStateObject(VK_CMD_BUFFER cmdBuffer, VK_STATE_BIND_POINT stateBindPoint, VK_DYNAMIC_STATE_OBJECT state)
{
MT_OBJ_INFO *pObjInfo;
loader_platform_thread_lock_mutex(&globalLock);
if (!pCmdBuf) {
char str[1024];
sprintf(str, "Unable to find command buffer object %p, was it ever created?", (void*)cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_INVALID_CB, "DD", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_INVALID_CB, "DD", str);
}
pObjInfo = getObjectInfo(state);
if (!pObjInfo) {
char str[1024];
sprintf(str, "Unable to find dynamic state object %p, was it ever created?", (void*)state);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, state, 0, MEMTRACK_INVALID_OBJECT, "DD", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, state, 0, MEMTRACK_INVALID_OBJECT, "DD", str);
}
pCmdBuf->pDynamicState[stateBindPoint] = pObjInfo;
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdBindDynamicStateObject(cmdBuffer, stateBindPoint, state);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDescriptorSets(
- XGL_CMD_BUFFER cmdBuffer,
- XGL_PIPELINE_BIND_POINT pipelineBindPoint,
- XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
+VK_LAYER_EXPORT void VKAPI vkCmdBindDescriptorSets(
+ VK_CMD_BUFFER cmdBuffer,
+ VK_PIPELINE_BIND_POINT pipelineBindPoint,
+ VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain,
uint32_t layoutChainSlot,
uint32_t count,
- const XGL_DESCRIPTOR_SET* pDescriptorSets,
+ const VK_DESCRIPTOR_SET* pDescriptorSets,
const uint32_t* pUserData)
{
// TODO : Somewhere need to verify that all textures referenced by shaders in DS are in some type of *SHADER_READ* state
nextTable.CmdBindDescriptorSets(cmdBuffer, pipelineBindPoint, layoutChain, layoutChainSlot, count, pDescriptorSets, pUserData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindVertexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t binding)
+VK_LAYER_EXPORT void VKAPI vkCmdBindVertexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t binding)
{
nextTable.CmdBindVertexBuffer(cmdBuffer, buffer, offset, binding);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindIndexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, XGL_INDEX_TYPE indexType)
+VK_LAYER_EXPORT void VKAPI vkCmdBindIndexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, VK_INDEX_TYPE indexType)
{
nextTable.CmdBindIndexBuffer(cmdBuffer, buffer, offset, indexType);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(buffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(buffer);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdDrawIndirect() call unable to update binding of buffer %p to cmdBuffer %p", buffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdDrawIndirect() call unable to update binding of buffer %p to cmdBuffer %p", buffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdDrawIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexedIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexedIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(buffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(buffer);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdDrawIndexedIndirect() call unable to update binding of buffer %p to cmdBuffer %p", buffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdDrawIndexedIndirect() call unable to update binding of buffer %p to cmdBuffer %p", buffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdDrawIndexedIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatchIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatchIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(buffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(buffer);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdDispatchIndirect() call unable to update binding of buffer %p to cmdBuffer %p", buffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdDispatchIndirect() call unable to update binding of buffer %p to cmdBuffer %p", buffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdDispatchIndirect(cmdBuffer, buffer, offset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_BUFFER destBuffer,
- uint32_t regionCount, const XGL_BUFFER_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_BUFFER destBuffer,
+ uint32_t regionCount, const VK_BUFFER_COPY* pRegions)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(srcBuffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(srcBuffer);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCopyBuffer() call unable to update binding of srcBuffer %p to cmdBuffer %p", srcBuffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdCopyBuffer() call unable to update binding of srcBuffer %p to cmdBuffer %p", srcBuffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
mem = getMemBindingFromObject(destBuffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCopyBuffer() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdCopyBuffer() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdCopyBuffer(cmdBuffer, srcBuffer, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImage(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t regionCount, const XGL_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t regionCount, const VK_IMAGE_COPY* pRegions)
{
// TODO : Each image will have mem mapping so track them
nextTable.CmdCopyImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBlitImage(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t regionCount, const XGL_IMAGE_BLIT* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdBlitImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t regionCount, const VK_IMAGE_BLIT* pRegions)
{
// TODO : Each image will have mem mapping so track them
nextTable.CmdBlitImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBufferToImage(XGL_CMD_BUFFER cmdBuffer,
- XGL_BUFFER srcBuffer,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBufferToImage(VK_CMD_BUFFER cmdBuffer,
+ VK_BUFFER srcBuffer,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
// TODO : Track this
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(destImage);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(destImage);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCopyMemoryToImage() call unable to update binding of destImage buffer %p to cmdBuffer %p", destImage, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdCopyMemoryToImage() call unable to update binding of destImage buffer %p to cmdBuffer %p", destImage, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
mem = getMemBindingFromObject(srcBuffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCopyMemoryToImage() call unable to update binding of srcBuffer %p to cmdBuffer %p", srcBuffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdCopyMemoryToImage() call unable to update binding of srcBuffer %p to cmdBuffer %p", srcBuffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdCopyBufferToImage(cmdBuffer, srcBuffer, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImageToBuffer(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_BUFFER destBuffer,
- uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImageToBuffer(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_BUFFER destBuffer,
+ uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
// TODO : Track this
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(srcImage);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(srcImage);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCopyImageToMemory() call unable to update binding of srcImage buffer %p to cmdBuffer %p", srcImage, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdCopyImageToMemory() call unable to update binding of srcImage buffer %p to cmdBuffer %p", srcImage, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
mem = getMemBindingFromObject(destBuffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCopyImageToMemory() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdCopyImageToMemory() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdCopyImageToBuffer(cmdBuffer, srcImage, srcImageLayout, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCloneImageData(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout)
+VK_LAYER_EXPORT void VKAPI vkCmdCloneImageData(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout)
{
// TODO : Each image will have mem mapping so track them
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(srcImage);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(srcImage);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCloneImageData() call unable to update binding of srcImage buffer %p to cmdBuffer %p", srcImage, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdCloneImageData() call unable to update binding of srcImage buffer %p to cmdBuffer %p", srcImage, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
mem = getMemBindingFromObject(destImage);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdCloneImageData() call unable to update binding of destImage buffer %p to cmdBuffer %p", destImage, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdCloneImageData() call unable to update binding of destImage buffer %p to cmdBuffer %p", destImage, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdCloneImageData(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdUpdateBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE dataSize, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdUpdateBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE dataSize, const uint32_t* pData)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(destBuffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(destBuffer);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdUpdateMemory() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdUpdateMemory() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdUpdateBuffer(cmdBuffer, destBuffer, destOffset, dataSize, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdFillBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE fillSize, uint32_t data)
+VK_LAYER_EXPORT void VKAPI vkCmdFillBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE fillSize, uint32_t data)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(destBuffer);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(destBuffer);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdFillMemory() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdFillMemory() call unable to update binding of destBuffer %p to cmdBuffer %p", destBuffer, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdFillBuffer(cmdBuffer, destBuffer, destOffset, fillSize, data);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearColorImage(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout,
- XGL_CLEAR_COLOR color,
- uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearColorImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout,
+ VK_CLEAR_COLOR color,
+ uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
- // TODO : Verify memory is in XGL_IMAGE_STATE_CLEAR state
+ // TODO : Verify memory is in VK_IMAGE_STATE_CLEAR state
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(image);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(image);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdClearColorImage() call unable to update binding of image buffer %p to cmdBuffer %p", image, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdClearColorImage() call unable to update binding of image buffer %p to cmdBuffer %p", image, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdClearColorImage(cmdBuffer, image, imageLayout, color, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearDepthStencil(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout,
+VK_LAYER_EXPORT void VKAPI vkCmdClearDepthStencil(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout,
float depth, uint32_t stencil,
- uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+ uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
- // TODO : Verify memory is in XGL_IMAGE_STATE_CLEAR state
+ // TODO : Verify memory is in VK_IMAGE_STATE_CLEAR state
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(image);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(image);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdClearDepthStencil() call unable to update binding of image buffer %p to cmdBuffer %p", image, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdClearDepthStencil() call unable to update binding of image buffer %p to cmdBuffer %p", image, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdClearDepthStencil(cmdBuffer, image, imageLayout, depth, stencil, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResolveImage(XGL_CMD_BUFFER cmdBuffer,
- XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout,
- XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout,
- uint32_t rectCount, const XGL_IMAGE_RESOLVE* pRects)
+VK_LAYER_EXPORT void VKAPI vkCmdResolveImage(VK_CMD_BUFFER cmdBuffer,
+ VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout,
+ VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout,
+ uint32_t rectCount, const VK_IMAGE_RESOLVE* pRects)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(srcImage);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(srcImage);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdResolveImage() call unable to update binding of srcImage buffer %p to cmdBuffer %p", srcImage, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdResolveImage() call unable to update binding of srcImage buffer %p to cmdBuffer %p", srcImage, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
mem = getMemBindingFromObject(destImage);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdResolveImage() call unable to update binding of destImage buffer %p to cmdBuffer %p", destImage, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdResolveImage() call unable to update binding of destImage buffer %p to cmdBuffer %p", destImage, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdResolveImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, rectCount, pRects);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot, XGL_FLAGS flags)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot, VK_FLAGS flags)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(queryPool);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(queryPool);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdBeginQuery() call unable to update binding of queryPool buffer %p to cmdBuffer %p", queryPool, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdBeginQuery() call unable to update binding of queryPool buffer %p to cmdBuffer %p", queryPool, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdBeginQuery(cmdBuffer, queryPool, slot, flags);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot)
+VK_LAYER_EXPORT void VKAPI vkCmdEndQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(queryPool);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(queryPool);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdEndQuery() call unable to update binding of queryPool buffer %p to cmdBuffer %p", queryPool, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdEndQuery() call unable to update binding of queryPool buffer %p to cmdBuffer %p", queryPool, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdEndQuery(cmdBuffer, queryPool, slot);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetQueryPool(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
+VK_LAYER_EXPORT void VKAPI vkCmdResetQueryPool(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
{
loader_platform_thread_lock_mutex(&globalLock);
- XGL_GPU_MEMORY mem = getMemBindingFromObject(queryPool);
- if (XGL_FALSE == updateCBBinding(cmdBuffer, mem)) {
+ VK_GPU_MEMORY mem = getMemBindingFromObject(queryPool);
+ if (VK_FALSE == updateCBBinding(cmdBuffer, mem)) {
char str[1024];
- sprintf(str, "In xglCmdResetQueryPool() call unable to update binding of queryPool buffer %p to cmdBuffer %p", queryPool, cmdBuffer);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkCmdResetQueryPool() call unable to update binding of queryPool buffer %p to cmdBuffer %p", queryPool, cmdBuffer);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, cmdBuffer, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
loader_platform_thread_unlock_mutex(&globalLock);
nextTable.CmdResetQueryPool(cmdBuffer, queryPool, startQuery, queryCount);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
{
// This layer intercepts callbacks
- XGL_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (XGL_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(XGL_LAYER_DBG_FUNCTION_NODE));
+ VK_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (VK_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(VK_LAYER_DBG_FUNCTION_NODE));
if (!pNewDbgFuncNode)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
pNewDbgFuncNode->pfnMsgCallback = pfnMsgCallback;
pNewDbgFuncNode->pUserData = pUserData;
pNewDbgFuncNode->pNext = g_pDbgFunctionHead;
g_pDbgFunctionHead = pNewDbgFuncNode;
// force callbacks if DebugAction hasn't been set already other than initial value
if (g_actionIsDefault) {
- g_debugAction = XGL_DBG_LAYER_ACTION_CALLBACK;
+ g_debugAction = VK_DBG_LAYER_ACTION_CALLBACK;
}
- XGL_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
+ VK_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
return result;
}
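`vkDbgRegisterMsgCallback` above prepends a `VK_LAYER_DBG_FUNCTION_NODE` to a global singly linked list. A self-contained sketch of that pattern with local stand-in types, including the matching unlink with the head-node case handled explicitly and the node's storage freed:

```cpp
#include <cassert>
#include <cstdlib>

// Local stand-ins mirroring the VK_LAYER_DBG_FUNCTION_NODE usage above.
typedef void (*MsgCallback)(const char* msg);

struct DbgNode {
    MsgCallback pfnMsgCallback;
    void*       pUserData;
    DbgNode*    pNext;
};

static DbgNode* g_head = nullptr;

static void demoCbA(const char*) {}
static void demoCbB(const char*) {}

// Prepend a node, as the register path does.
static bool registerCb(MsgCallback cb, void* user) {
    DbgNode* n = (DbgNode*)malloc(sizeof(DbgNode));
    if (!n) return false;  // analogous to VK_ERROR_OUT_OF_MEMORY
    n->pfnMsgCallback = cb;
    n->pUserData = user;
    n->pNext = g_head;
    g_head = n;
    return true;
}

// Unlink and free the matching node; when the match is the head,
// g_head itself must be advanced.
static bool unregisterCb(MsgCallback cb) {
    DbgNode* prev = nullptr;
    for (DbgNode* cur = g_head; cur; prev = cur, cur = cur->pNext) {
        if (cur->pfnMsgCallback == cb) {
            if (prev) prev->pNext = cur->pNext;
            else      g_head = cur->pNext;
            free(cur);
            return true;
        }
    }
    return false;
}
```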
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
- XGL_LAYER_DBG_FUNCTION_NODE *pInfo = g_pDbgFunctionHead;
- XGL_LAYER_DBG_FUNCTION_NODE *pPrev = pInfo;
+ VK_LAYER_DBG_FUNCTION_NODE *pInfo = g_pDbgFunctionHead;
+ VK_LAYER_DBG_FUNCTION_NODE *pPrev = pInfo;
while (pInfo) {
if (pInfo->pfnMsgCallback == pfnMsgCallback) {
pPrev->pNext = pInfo->pNext;
if (g_pDbgFunctionHead == NULL)
{
if (g_actionIsDefault) {
- g_debugAction = XGL_DBG_LAYER_ACTION_LOG_MSG;
+ g_debugAction = VK_DBG_LAYER_ACTION_LOG_MSG;
} else {
- g_debugAction = (XGL_LAYER_DBG_ACTION)(g_debugAction & ~((uint32_t)XGL_DBG_LAYER_ACTION_CALLBACK));
+ g_debugAction = (VK_LAYER_DBG_ACTION)(g_debugAction & ~((uint32_t)VK_DBG_LAYER_ACTION_CALLBACK));
}
}
- XGL_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
+ VK_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
return result;
}
#if !defined(WIN32)
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11CreatePresentableImage(XGL_DEVICE device, const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo,
- XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11CreatePresentableImage(VK_DEVICE device, const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo,
+ VK_IMAGE* pImage, VK_GPU_MEMORY* pMem)
{
- XGL_RESULT result = nextTable.WsiX11CreatePresentableImage(device, pCreateInfo, pImage, pMem);
+ VK_RESULT result = nextTable.WsiX11CreatePresentableImage(device, pCreateInfo, pImage, pMem);
loader_platform_thread_lock_mutex(&globalLock);
- if (XGL_SUCCESS == result) {
+ if (VK_SUCCESS == result) {
// Add image object, then insert the new Mem Object and then bind it to created image
- addObjectInfo(*pImage, _XGL_STRUCTURE_TYPE_MAX_ENUM, pCreateInfo, sizeof(XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO), "wsi_x11_image");
+ addObjectInfo(*pImage, _VK_STRUCTURE_TYPE_MAX_ENUM, pCreateInfo, sizeof(VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO), "wsi_x11_image");
addMemObjInfo(*pMem, NULL);
- if (XGL_FALSE == updateObjectBinding(*pImage, *pMem)) {
+ if (VK_FALSE == updateObjectBinding(*pImage, *pMem)) {
char str[1024];
- sprintf(str, "In xglWsiX11CreatePresentableImage(), unable to set image %p binding to mem obj %p", (void*)*pImage, (void*)*pMem);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, *pImage, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
+ sprintf(str, "In vkWsiX11CreatePresentableImage(), unable to set image %p binding to mem obj %p", (void*)*pImage, (void*)*pMem);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, *pImage, 0, MEMTRACK_MEMORY_BINDING_ERROR, "MEM", str);
}
}
printObjList();
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11QueuePresent(XGL_QUEUE queue, const XGL_WSI_X11_PRESENT_INFO* pPresentInfo, XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11QueuePresent(VK_QUEUE queue, const VK_WSI_X11_PRESENT_INFO* pPresentInfo, VK_FENCE fence)
{
loader_platform_thread_lock_mutex(&globalLock);
addFenceInfo(fence, queue);
char str[1024];
- sprintf(str, "In xglWsiX11QueuePresent(), checking queue %p for fence %p", queue, fence);
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_NONE, "MEM", str);
+ sprintf(str, "In vkWsiX11QueuePresent(), checking queue %p for fence %p", queue, fence);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, queue, 0, MEMTRACK_NONE, "MEM", str);
loader_platform_thread_unlock_mutex(&globalLock);
- XGL_RESULT result = nextTable.WsiX11QueuePresent(queue, pPresentInfo, fence);
+ VK_RESULT result = nextTable.WsiX11QueuePresent(queue, pPresentInfo, fence);
return result;
}
#endif // WIN32
-XGL_LAYER_EXPORT void* XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* funcName)
+VK_LAYER_EXPORT void* VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* funcName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
if (gpu == NULL)
return NULL;
pCurObj = gpuw;
loader_platform_thread_once(&g_initOnce, initMemTracker);
- if (!strcmp(funcName, "xglGetProcAddr"))
- return (void *) xglGetProcAddr;
- if (!strcmp(funcName, "xglCreateDevice"))
- return (void*) xglCreateDevice;
- if (!strcmp(funcName, "xglDestroyDevice"))
- return (void*) xglDestroyDevice;
- if (!strcmp(funcName, "xglGetExtensionSupport"))
- return (void*) xglGetExtensionSupport;
- if (!strcmp(funcName, "xglEnumerateLayers"))
- return (void*) xglEnumerateLayers;
- if (!strcmp(funcName, "xglQueueSubmit"))
- return (void*) xglQueueSubmit;
- if (!strcmp(funcName, "xglAllocMemory"))
- return (void*) xglAllocMemory;
- if (!strcmp(funcName, "xglFreeMemory"))
- return (void*) xglFreeMemory;
- if (!strcmp(funcName, "xglSetMemoryPriority"))
- return (void*) xglSetMemoryPriority;
- if (!strcmp(funcName, "xglMapMemory"))
- return (void*) xglMapMemory;
- if (!strcmp(funcName, "xglUnmapMemory"))
- return (void*) xglUnmapMemory;
- if (!strcmp(funcName, "xglPinSystemMemory"))
- return (void*) xglPinSystemMemory;
- if (!strcmp(funcName, "xglOpenSharedMemory"))
- return (void*) xglOpenSharedMemory;
- if (!strcmp(funcName, "xglOpenPeerMemory"))
- return (void*) xglOpenPeerMemory;
- if (!strcmp(funcName, "xglOpenPeerImage"))
- return (void*) xglOpenPeerImage;
- if (!strcmp(funcName, "xglDestroyObject"))
- return (void*) xglDestroyObject;
- if (!strcmp(funcName, "xglGetObjectInfo"))
- return (void*) xglGetObjectInfo;
- if (!strcmp(funcName, "xglBindObjectMemory"))
- return (void*) xglBindObjectMemory;
- if (!strcmp(funcName, "xglCreateFence"))
- return (void*) xglCreateFence;
- if (!strcmp(funcName, "xglGetFenceStatus"))
- return (void*) xglGetFenceStatus;
- if (!strcmp(funcName, "xglResetFences"))
- return (void*) xglResetFences;
- if (!strcmp(funcName, "xglWaitForFences"))
- return (void*) xglWaitForFences;
- if (!strcmp(funcName, "xglQueueWaitIdle"))
- return (void*) xglQueueWaitIdle;
- if (!strcmp(funcName, "xglDeviceWaitIdle"))
- return (void*) xglDeviceWaitIdle;
- if (!strcmp(funcName, "xglCreateEvent"))
- return (void*) xglCreateEvent;
- if (!strcmp(funcName, "xglCreateQueryPool"))
- return (void*) xglCreateQueryPool;
- if (!strcmp(funcName, "xglCreateBuffer"))
- return (void*) xglCreateBuffer;
- if (!strcmp(funcName, "xglCreateBufferView"))
- return (void*) xglCreateBufferView;
- if (!strcmp(funcName, "xglCreateImage"))
- return (void*) xglCreateImage;
- if (!strcmp(funcName, "xglCreateImageView"))
- return (void*) xglCreateImageView;
- if (!strcmp(funcName, "xglCreateColorAttachmentView"))
- return (void*) xglCreateColorAttachmentView;
- if (!strcmp(funcName, "xglCreateDepthStencilView"))
- return (void*) xglCreateDepthStencilView;
- if (!strcmp(funcName, "xglCreateShader"))
- return (void*) xglCreateShader;
- if (!strcmp(funcName, "xglCreateGraphicsPipeline"))
- return (void*) xglCreateGraphicsPipeline;
- if (!strcmp(funcName, "xglCreateGraphicsPipelineDerivative"))
- return (void*) xglCreateGraphicsPipelineDerivative;
- if (!strcmp(funcName, "xglCreateComputePipeline"))
- return (void*) xglCreateComputePipeline;
- if (!strcmp(funcName, "xglCreateSampler"))
- return (void*) xglCreateSampler;
- if (!strcmp(funcName, "xglCreateDynamicViewportState"))
- return (void*) xglCreateDynamicViewportState;
- if (!strcmp(funcName, "xglCreateDynamicRasterState"))
- return (void*) xglCreateDynamicRasterState;
- if (!strcmp(funcName, "xglCreateDynamicColorBlendState"))
- return (void*) xglCreateDynamicColorBlendState;
- if (!strcmp(funcName, "xglCreateDynamicDepthStencilState"))
- return (void*) xglCreateDynamicDepthStencilState;
- if (!strcmp(funcName, "xglCreateCommandBuffer"))
- return (void*) xglCreateCommandBuffer;
- if (!strcmp(funcName, "xglBeginCommandBuffer"))
- return (void*) xglBeginCommandBuffer;
- if (!strcmp(funcName, "xglEndCommandBuffer"))
- return (void*) xglEndCommandBuffer;
- if (!strcmp(funcName, "xglResetCommandBuffer"))
- return (void*) xglResetCommandBuffer;
- if (!strcmp(funcName, "xglCmdBindPipeline"))
- return (void*) xglCmdBindPipeline;
- if (!strcmp(funcName, "xglCmdBindDynamicStateObject"))
- return (void*) xglCmdBindDynamicStateObject;
- if (!strcmp(funcName, "xglCmdBindDescriptorSets"))
- return (void*) xglCmdBindDescriptorSets;
- if (!strcmp(funcName, "xglCmdBindVertexBuffer"))
- return (void*) xglCmdBindVertexBuffer;
- if (!strcmp(funcName, "xglCmdBindIndexBuffer"))
- return (void*) xglCmdBindIndexBuffer;
- if (!strcmp(funcName, "xglCmdDrawIndirect"))
- return (void*) xglCmdDrawIndirect;
- if (!strcmp(funcName, "xglCmdDrawIndexedIndirect"))
- return (void*) xglCmdDrawIndexedIndirect;
- if (!strcmp(funcName, "xglCmdDispatchIndirect"))
- return (void*) xglCmdDispatchIndirect;
- if (!strcmp(funcName, "xglCmdCopyBuffer"))
- return (void*) xglCmdCopyBuffer;
- if (!strcmp(funcName, "xglCmdCopyImage"))
- return (void*) xglCmdCopyImage;
- if (!strcmp(funcName, "xglCmdCopyBufferToImage"))
- return (void*) xglCmdCopyBufferToImage;
- if (!strcmp(funcName, "xglCmdCopyImageToBuffer"))
- return (void*) xglCmdCopyImageToBuffer;
- if (!strcmp(funcName, "xglCmdCloneImageData"))
- return (void*) xglCmdCloneImageData;
- if (!strcmp(funcName, "xglCmdUpdateBuffer"))
- return (void*) xglCmdUpdateBuffer;
- if (!strcmp(funcName, "xglCmdFillBuffer"))
- return (void*) xglCmdFillBuffer;
- if (!strcmp(funcName, "xglCmdClearColorImage"))
- return (void*) xglCmdClearColorImage;
- if (!strcmp(funcName, "xglCmdClearDepthStencil"))
- return (void*) xglCmdClearDepthStencil;
- if (!strcmp(funcName, "xglCmdResolveImage"))
- return (void*) xglCmdResolveImage;
- if (!strcmp(funcName, "xglCmdBeginQuery"))
- return (void*) xglCmdBeginQuery;
- if (!strcmp(funcName, "xglCmdEndQuery"))
- return (void*) xglCmdEndQuery;
- if (!strcmp(funcName, "xglCmdResetQueryPool"))
- return (void*) xglCmdResetQueryPool;
- if (!strcmp(funcName, "xglDbgRegisterMsgCallback"))
- return (void*) xglDbgRegisterMsgCallback;
- if (!strcmp(funcName, "xglDbgUnregisterMsgCallback"))
- return (void*) xglDbgUnregisterMsgCallback;
- if (!strcmp(funcName, "xglGetDeviceQueue"))
- return (void*) xglGetDeviceQueue;
- if (!strcmp(funcName, "xglQueueAddMemReference"))
- return (void*) xglQueueAddMemReference;
- if (!strcmp(funcName, "xglQueueRemoveMemReference"))
- return (void*) xglQueueRemoveMemReference;
+ if (!strcmp(funcName, "vkGetProcAddr"))
+ return (void *) vkGetProcAddr;
+ if (!strcmp(funcName, "vkCreateDevice"))
+ return (void*) vkCreateDevice;
+ if (!strcmp(funcName, "vkDestroyDevice"))
+ return (void*) vkDestroyDevice;
+ if (!strcmp(funcName, "vkGetExtensionSupport"))
+ return (void*) vkGetExtensionSupport;
+ if (!strcmp(funcName, "vkEnumerateLayers"))
+ return (void*) vkEnumerateLayers;
+ if (!strcmp(funcName, "vkQueueSubmit"))
+ return (void*) vkQueueSubmit;
+ if (!strcmp(funcName, "vkAllocMemory"))
+ return (void*) vkAllocMemory;
+ if (!strcmp(funcName, "vkFreeMemory"))
+ return (void*) vkFreeMemory;
+ if (!strcmp(funcName, "vkSetMemoryPriority"))
+ return (void*) vkSetMemoryPriority;
+ if (!strcmp(funcName, "vkMapMemory"))
+ return (void*) vkMapMemory;
+ if (!strcmp(funcName, "vkUnmapMemory"))
+ return (void*) vkUnmapMemory;
+ if (!strcmp(funcName, "vkPinSystemMemory"))
+ return (void*) vkPinSystemMemory;
+ if (!strcmp(funcName, "vkOpenSharedMemory"))
+ return (void*) vkOpenSharedMemory;
+ if (!strcmp(funcName, "vkOpenPeerMemory"))
+ return (void*) vkOpenPeerMemory;
+ if (!strcmp(funcName, "vkOpenPeerImage"))
+ return (void*) vkOpenPeerImage;
+ if (!strcmp(funcName, "vkDestroyObject"))
+ return (void*) vkDestroyObject;
+ if (!strcmp(funcName, "vkGetObjectInfo"))
+ return (void*) vkGetObjectInfo;
+ if (!strcmp(funcName, "vkBindObjectMemory"))
+ return (void*) vkBindObjectMemory;
+ if (!strcmp(funcName, "vkCreateFence"))
+ return (void*) vkCreateFence;
+ if (!strcmp(funcName, "vkGetFenceStatus"))
+ return (void*) vkGetFenceStatus;
+ if (!strcmp(funcName, "vkResetFences"))
+ return (void*) vkResetFences;
+ if (!strcmp(funcName, "vkWaitForFences"))
+ return (void*) vkWaitForFences;
+ if (!strcmp(funcName, "vkQueueWaitIdle"))
+ return (void*) vkQueueWaitIdle;
+ if (!strcmp(funcName, "vkDeviceWaitIdle"))
+ return (void*) vkDeviceWaitIdle;
+ if (!strcmp(funcName, "vkCreateEvent"))
+ return (void*) vkCreateEvent;
+ if (!strcmp(funcName, "vkCreateQueryPool"))
+ return (void*) vkCreateQueryPool;
+ if (!strcmp(funcName, "vkCreateBuffer"))
+ return (void*) vkCreateBuffer;
+ if (!strcmp(funcName, "vkCreateBufferView"))
+ return (void*) vkCreateBufferView;
+ if (!strcmp(funcName, "vkCreateImage"))
+ return (void*) vkCreateImage;
+ if (!strcmp(funcName, "vkCreateImageView"))
+ return (void*) vkCreateImageView;
+ if (!strcmp(funcName, "vkCreateColorAttachmentView"))
+ return (void*) vkCreateColorAttachmentView;
+ if (!strcmp(funcName, "vkCreateDepthStencilView"))
+ return (void*) vkCreateDepthStencilView;
+ if (!strcmp(funcName, "vkCreateShader"))
+ return (void*) vkCreateShader;
+ if (!strcmp(funcName, "vkCreateGraphicsPipeline"))
+ return (void*) vkCreateGraphicsPipeline;
+ if (!strcmp(funcName, "vkCreateGraphicsPipelineDerivative"))
+ return (void*) vkCreateGraphicsPipelineDerivative;
+ if (!strcmp(funcName, "vkCreateComputePipeline"))
+ return (void*) vkCreateComputePipeline;
+ if (!strcmp(funcName, "vkCreateSampler"))
+ return (void*) vkCreateSampler;
+ if (!strcmp(funcName, "vkCreateDynamicViewportState"))
+ return (void*) vkCreateDynamicViewportState;
+ if (!strcmp(funcName, "vkCreateDynamicRasterState"))
+ return (void*) vkCreateDynamicRasterState;
+ if (!strcmp(funcName, "vkCreateDynamicColorBlendState"))
+ return (void*) vkCreateDynamicColorBlendState;
+ if (!strcmp(funcName, "vkCreateDynamicDepthStencilState"))
+ return (void*) vkCreateDynamicDepthStencilState;
+ if (!strcmp(funcName, "vkCreateCommandBuffer"))
+ return (void*) vkCreateCommandBuffer;
+ if (!strcmp(funcName, "vkBeginCommandBuffer"))
+ return (void*) vkBeginCommandBuffer;
+ if (!strcmp(funcName, "vkEndCommandBuffer"))
+ return (void*) vkEndCommandBuffer;
+ if (!strcmp(funcName, "vkResetCommandBuffer"))
+ return (void*) vkResetCommandBuffer;
+ if (!strcmp(funcName, "vkCmdBindPipeline"))
+ return (void*) vkCmdBindPipeline;
+ if (!strcmp(funcName, "vkCmdBindDynamicStateObject"))
+ return (void*) vkCmdBindDynamicStateObject;
+ if (!strcmp(funcName, "vkCmdBindDescriptorSets"))
+ return (void*) vkCmdBindDescriptorSets;
+ if (!strcmp(funcName, "vkCmdBindVertexBuffer"))
+ return (void*) vkCmdBindVertexBuffer;
+ if (!strcmp(funcName, "vkCmdBindIndexBuffer"))
+ return (void*) vkCmdBindIndexBuffer;
+ if (!strcmp(funcName, "vkCmdDrawIndirect"))
+ return (void*) vkCmdDrawIndirect;
+ if (!strcmp(funcName, "vkCmdDrawIndexedIndirect"))
+ return (void*) vkCmdDrawIndexedIndirect;
+ if (!strcmp(funcName, "vkCmdDispatchIndirect"))
+ return (void*) vkCmdDispatchIndirect;
+ if (!strcmp(funcName, "vkCmdCopyBuffer"))
+ return (void*) vkCmdCopyBuffer;
+ if (!strcmp(funcName, "vkCmdCopyImage"))
+ return (void*) vkCmdCopyImage;
+ if (!strcmp(funcName, "vkCmdCopyBufferToImage"))
+ return (void*) vkCmdCopyBufferToImage;
+ if (!strcmp(funcName, "vkCmdCopyImageToBuffer"))
+ return (void*) vkCmdCopyImageToBuffer;
+ if (!strcmp(funcName, "vkCmdCloneImageData"))
+ return (void*) vkCmdCloneImageData;
+ if (!strcmp(funcName, "vkCmdUpdateBuffer"))
+ return (void*) vkCmdUpdateBuffer;
+ if (!strcmp(funcName, "vkCmdFillBuffer"))
+ return (void*) vkCmdFillBuffer;
+ if (!strcmp(funcName, "vkCmdClearColorImage"))
+ return (void*) vkCmdClearColorImage;
+ if (!strcmp(funcName, "vkCmdClearDepthStencil"))
+ return (void*) vkCmdClearDepthStencil;
+ if (!strcmp(funcName, "vkCmdResolveImage"))
+ return (void*) vkCmdResolveImage;
+ if (!strcmp(funcName, "vkCmdBeginQuery"))
+ return (void*) vkCmdBeginQuery;
+ if (!strcmp(funcName, "vkCmdEndQuery"))
+ return (void*) vkCmdEndQuery;
+ if (!strcmp(funcName, "vkCmdResetQueryPool"))
+ return (void*) vkCmdResetQueryPool;
+ if (!strcmp(funcName, "vkDbgRegisterMsgCallback"))
+ return (void*) vkDbgRegisterMsgCallback;
+ if (!strcmp(funcName, "vkDbgUnregisterMsgCallback"))
+ return (void*) vkDbgUnregisterMsgCallback;
+ if (!strcmp(funcName, "vkGetDeviceQueue"))
+ return (void*) vkGetDeviceQueue;
+ if (!strcmp(funcName, "vkQueueAddMemReference"))
+ return (void*) vkQueueAddMemReference;
+ if (!strcmp(funcName, "vkQueueRemoveMemReference"))
+ return (void*) vkQueueRemoveMemReference;
#if !defined(WIN32)
- if (!strcmp(funcName, "xglWsiX11CreatePresentableImage"))
- return (void*) xglWsiX11CreatePresentableImage;
- if (!strcmp(funcName, "xglWsiX11QueuePresent"))
- return (void*) xglWsiX11QueuePresent;
+ if (!strcmp(funcName, "vkWsiX11CreatePresentableImage"))
+ return (void*) vkWsiX11CreatePresentableImage;
+ if (!strcmp(funcName, "vkWsiX11QueuePresent"))
+ return (void*) vkWsiX11QueuePresent;
#endif
else {
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU)gpuw->nextObject, funcName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU)gpuw->nextObject, funcName);
}
}
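The `strcmp` chain in `vkGetProcAddr` grows by two lines per entry point. A table-driven lookup is one common alternative sketch; the names here (`lookupProc`, `demoFill`, `demoClear`) are hypothetical, and a `nullptr` result corresponds to the fall-through to the next layer's `pGPA`:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// One row per exported entry point.
struct ProcEntry {
    const char* name;
    void*       addr;
};

// Stand-in entry points; the real table would list the vk* functions.
static void demoFill(void)  {}
static void demoClear(void) {}

static const ProcEntry g_procTable[] = {
    { "vkCmdFillBuffer",      (void*)demoFill  },
    { "vkCmdClearColorImage", (void*)demoClear },
};

// Linear scan of the table; returns nullptr when the name is unknown so
// the caller can forward the query down the layer chain.
static void* lookupProc(const char* funcName) {
    for (size_t i = 0; i < sizeof(g_procTable) / sizeof(g_procTable[0]); ++i)
        if (!strcmp(g_procTable[i].name, funcName))
            return g_procTable[i].addr;
    return nullptr;
}
```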
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2015 LunarG, Inc.
*
* DEALINGS IN THE SOFTWARE.
*/
#pragma once
-#include "xglLayer.h"
+#include "vkLayer.h"
#ifdef __cplusplus
extern "C" {
MEMTRACK_FREED_MEM_REF = 6, // MEM Obj freed while it still has obj and/or CB refs
MEMTRACK_MEM_OBJ_CLEAR_EMPTY_BINDINGS = 7, // Clearing bindings on mem obj that doesn't have any bindings
MEMTRACK_MISSING_MEM_BINDINGS = 8, // Trying to retrieve mem bindings, but none found (may be internal error)
- MEMTRACK_INVALID_OBJECT = 9, // Attempting to reference generic XGL Object that is invalid
- MEMTRACK_FREE_MEM_ERROR = 10, // Error while calling xglFreeMemory
+ MEMTRACK_INVALID_OBJECT = 9, // Attempting to reference generic VK Object that is invalid
+ MEMTRACK_FREE_MEM_ERROR = 10, // Error while calling vkFreeMemory
MEMTRACK_DESTROY_OBJECT_ERROR = 11, // Destroying an object that has a memory reference
MEMTRACK_MEMORY_BINDING_ERROR = 12, // Error during one of many calls that bind memory to object or CB
MEMTRACK_OUT_OF_MEMORY_ERROR = 13, // malloc failed
- MEMTRACK_MEMORY_LEAK = 14, // Failure to call xglFreeMemory on Mem Obj prior to DestroyDevice
+ MEMTRACK_MEMORY_LEAK = 14, // Failure to call vkFreeMemory on Mem Obj prior to DestroyDevice
MEMTRACK_INVALID_STATE = 15, // Memory not in the correct state
- MEMTRACK_RESET_CB_WHILE_IN_FLIGHT = 16, // xglResetCommandBuffer() called on a CB that hasn't completed
+ MEMTRACK_RESET_CB_WHILE_IN_FLIGHT = 16, // vkResetCommandBuffer() called on a CB that hasn't completed
MEMTRACK_INVALID_QUEUE = 17, // Invalid queue requested or selected
MEMTRACK_INVALID_FENCE_STATE = 18, // Invalid Fence State signaled or used
} MEM_TRACK_ERROR;
* memObjMap -- map of Memory Objects to MT_MEM_OBJ_INFO structures
* Each MT_MEM_OBJ_INFO has two stl list containers with:
* -- all CBs referencing this mem obj
- * -- all XGL Objects that are bound to this memory
+ * -- all VK Objects that are bound to this memory
* objectMap -- map of objects to MT_OBJ_INFO structures
*
* Algorithm overview
// Data struct for tracking memory object
struct MT_MEM_OBJ_INFO {
uint32_t refCount; // Count of references (obj bindings or CB use)
- XGL_GPU_MEMORY mem;
- XGL_MEMORY_ALLOC_INFO allocInfo;
- list<XGL_OBJECT> pObjBindings; // list container of objects bound to this memory
- list<XGL_CMD_BUFFER> pCmdBufferBindings; // list container of cmd buffers that reference this mem object
+ VK_GPU_MEMORY mem;
+ VK_MEMORY_ALLOC_INFO allocInfo;
+ list<VK_OBJECT> pObjBindings; // list container of objects bound to this memory
+ list<VK_CMD_BUFFER> pCmdBufferBindings; // list container of cmd buffers that reference this mem object
};
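`MT_MEM_OBJ_INFO` implies the bookkeeping behind `updateCBBinding`: binding a command buffer to a memory object records the CB in `pCmdBufferBindings` and bumps `refCount`. The body of `updateCBBinding` is not part of this hunk, so the sketch below is an assumption about its effect, using local stand-in types, with a duplicate binding shown as one plausible failure mode:

```cpp
#include <cassert>
#include <cstdint>
#include <list>

// Stand-in for VK_CMD_BUFFER.
typedef void* DemoCmdBuf;

// Local analogue of MT_MEM_OBJ_INFO's refcount and CB list.
struct DemoMemObjInfo {
    uint32_t refCount = 0;
    std::list<DemoCmdBuf> pCmdBufferBindings;
};

// Record a CB binding; returns false without mutating state when the CB
// is already bound (the real layer's failure condition is not shown here).
static bool demoBindCB(DemoMemObjInfo& mem, DemoCmdBuf cb) {
    for (DemoCmdBuf b : mem.pCmdBufferBindings)
        if (b == cb)
            return false;
    mem.pCmdBufferBindings.push_back(cb);
    mem.refCount++;
    return true;
}
```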
struct MT_OBJ_INFO {
MT_MEM_OBJ_INFO* pMemObjInfo;
- XGL_OBJECT object;
- XGL_STRUCTURE_TYPE sType;
+ VK_OBJECT object;
+ VK_STRUCTURE_TYPE sType;
uint32_t ref_count;
// Capture all object types that may have memory bound. From prog guide:
// The only objects that are guaranteed to have no external memory
// requirements are devices, queues, command buffers, shaders and memory objects.
union {
- XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO color_attachment_view_create_info;
- XGL_DEPTH_STENCIL_VIEW_CREATE_INFO ds_view_create_info;
- XGL_IMAGE_VIEW_CREATE_INFO image_view_create_info;
- XGL_IMAGE_CREATE_INFO image_create_info;
- XGL_GRAPHICS_PIPELINE_CREATE_INFO graphics_pipeline_create_info;
- XGL_COMPUTE_PIPELINE_CREATE_INFO compute_pipeline_create_info;
- XGL_SAMPLER_CREATE_INFO sampler_create_info;
- XGL_FENCE_CREATE_INFO fence_create_info;
+ VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO color_attachment_view_create_info;
+ VK_DEPTH_STENCIL_VIEW_CREATE_INFO ds_view_create_info;
+ VK_IMAGE_VIEW_CREATE_INFO image_view_create_info;
+ VK_IMAGE_CREATE_INFO image_create_info;
+ VK_GRAPHICS_PIPELINE_CREATE_INFO graphics_pipeline_create_info;
+ VK_COMPUTE_PIPELINE_CREATE_INFO compute_pipeline_create_info;
+ VK_SAMPLER_CREATE_INFO sampler_create_info;
+ VK_FENCE_CREATE_INFO fence_create_info;
#ifndef _WIN32
- XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO wsi_x11_presentable_image_create_info;
+ VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO wsi_x11_presentable_image_create_info;
#endif // _WIN32
} create_info;
char object_name[64];
// Track all command buffers
struct MT_CB_INFO {
- XGL_CMD_BUFFER_CREATE_INFO createInfo;
- MT_OBJ_INFO* pDynamicState[XGL_NUM_STATE_BIND_POINT];
- XGL_PIPELINE pipelines[XGL_NUM_PIPELINE_BIND_POINT];
+ VK_CMD_BUFFER_CREATE_INFO createInfo;
+ MT_OBJ_INFO* pDynamicState[VK_NUM_STATE_BIND_POINT];
+ VK_PIPELINE pipelines[VK_NUM_PIPELINE_BIND_POINT];
uint32_t colorAttachmentCount;
- XGL_DEPTH_STENCIL_BIND_INFO dsBindInfo;
- XGL_CMD_BUFFER cmdBuffer;
+ VK_DEPTH_STENCIL_BIND_INFO dsBindInfo;
+ VK_CMD_BUFFER cmdBuffer;
uint64_t fenceId;
// Order dependent, stl containers must be at end of struct
- list<XGL_GPU_MEMORY> pMemObjList; // List container of Mem objs referenced by this CB
+ list<VK_GPU_MEMORY> pMemObjList; // List container of Mem objs referenced by this CB
};
// Associate fenceId with a fence object
struct MT_FENCE_INFO {
- XGL_FENCE fence; // Handle to fence object
- XGL_QUEUE queue; // Queue that this fence is submitted against
+ VK_FENCE fence; // Handle to fence object
+ VK_QUEUE queue; // Queue that this fence is submitted against
bool32_t localFence; // Is fence created by layer?
};
struct MT_QUEUE_INFO {
uint64_t lastRetiredId;
uint64_t lastSubmittedId;
- list<XGL_CMD_BUFFER> pQueueCmdBuffers;
- list<XGL_GPU_MEMORY> pMemRefList;
+ list<VK_CMD_BUFFER> pQueueCmdBuffers;
+ list<VK_GPU_MEMORY> pMemRefList;
};
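`MT_QUEUE_INFO`'s `lastSubmittedId`/`lastRetiredId` pair, together with `MT_CB_INFO::fenceId` and the `MEMTRACK_RESET_CB_WHILE_IN_FLIGHT` error, suggest a monotonic fence-id scheme: each submission stamps the CB with a fresh id, and a CB whose id exceeds the queue's last retired id is still in flight. The layer's actual logic is not in this hunk, so treat this as an assumed sketch:

```cpp
#include <cassert>
#include <cstdint>

// Local analogue of the two counters in MT_QUEUE_INFO.
struct DemoQueueInfo {
    uint64_t lastRetiredId   = 0;
    uint64_t lastSubmittedId = 0;
};

// Submit: hand out the next id; this is what would be stored as the
// CB's fenceId at vkQueueSubmit time.
static uint64_t demoSubmit(DemoQueueInfo& q) {
    return ++q.lastSubmittedId;
}

// Fence retirement advances the retired watermark monotonically.
static void demoRetire(DemoQueueInfo& q, uint64_t fenceId) {
    if (fenceId > q.lastRetiredId)
        q.lastRetiredId = fenceId;
}

// A CB is in flight while its fence id is above the watermark; resetting
// such a CB is the MEMTRACK_RESET_CB_WHILE_IN_FLIGHT case.
static bool demoCBInFlight(const DemoQueueInfo& q, uint64_t cbFenceId) {
    return cbFenceId > q.lastRetiredId;
}
```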
#ifdef __cplusplus
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <assert.h>
#include <unordered_map>
#include "loader_platform.h"
-#include "xgl_dispatch_table_helper.h"
-#include "xglLayer.h"
+#include "vk_dispatch_table_helper.h"
+#include "vkLayer.h"
// The following is #included again to catch certain OS-specific functions
// being used:
#include "loader_platform.h"
-static void initLayerTable(const XGL_BASE_LAYER_OBJECT *gpuw, XGL_LAYER_DISPATCH_TABLE *pTable, const unsigned int layerNum);
+static void initLayerTable(const VK_BASE_LAYER_OBJECT *gpuw, VK_LAYER_DISPATCH_TABLE *pTable, const unsigned int layerNum);
/******************************** Layer multi1 functions **************************/
-static std::unordered_map<void *, XGL_LAYER_DISPATCH_TABLE *> tableMap1;
+static std::unordered_map<void *, VK_LAYER_DISPATCH_TABLE *> tableMap1;
static bool layer1_first_activated = false;
-static XGL_LAYER_DISPATCH_TABLE * getLayer1Table(const XGL_BASE_LAYER_OBJECT *gpuw)
+static VK_LAYER_DISPATCH_TABLE * getLayer1Table(const VK_BASE_LAYER_OBJECT *gpuw)
{
- XGL_LAYER_DISPATCH_TABLE *pTable;
+ VK_LAYER_DISPATCH_TABLE *pTable;
assert(gpuw);
- std::unordered_map<void *, XGL_LAYER_DISPATCH_TABLE *>::const_iterator it = tableMap1.find((void *) gpuw);
+ std::unordered_map<void *, VK_LAYER_DISPATCH_TABLE *>::const_iterator it = tableMap1.find((void *) gpuw);
if (it == tableMap1.end())
{
- pTable = new XGL_LAYER_DISPATCH_TABLE;
+ pTable = new VK_LAYER_DISPATCH_TABLE;
tableMap1[(void *) gpuw] = pTable;
initLayerTable(gpuw, pTable, 1);
return pTable;
#endif
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi1CreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo,
- XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI multi1CreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo,
+ VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE* pTable = getLayer1Table(gpuw);
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE* pTable = getLayer1Table(gpuw);
- printf("At start of multi1 layer xglCreateDevice()\n");
- XGL_RESULT result = pTable->CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ printf("At start of multi1 layer vkCreateDevice()\n");
+ VK_RESULT result = pTable->CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
// create a mapping for the device object into the dispatch table
tableMap1.emplace(*pDevice, pTable);
- printf("Completed multi1 layer xglCreateDevice()\n");
+ printf("Completed multi1 layer vkCreateDevice()\n");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi1CreateGraphicsPipeline(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
- XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI multi1CreateGraphicsPipeline(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo,
+ VK_PIPELINE* pPipeline)
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap1[device];
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap1[device];
- printf("At start of multi1 layer xglCreateGraphicsPipeline()\n");
- XGL_RESULT result = pTable->CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
+ printf("At start of multi1 layer vkCreateGraphicsPipeline()\n");
+ VK_RESULT result = pTable->CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
// create a mapping for the pipeline object into the dispatch table
tableMap1.emplace(*pPipeline, pTable);
- printf("Completed multi1 layer xglCreateGraphicsPipeline()\n");
+ printf("Completed multi1 layer vkCreateGraphicsPipeline()\n");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi1StorePipeline(XGL_PIPELINE pipeline, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI multi1StorePipeline(VK_PIPELINE pipeline, size_t* pDataSize, void* pData)
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap1[pipeline];
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap1[pipeline];
- printf("At start of multi1 layer xglStorePipeline()\n");
- XGL_RESULT result = pTable->StorePipeline(pipeline, pDataSize, pData);
- printf("Completed multi1 layer xglStorePipeline()\n");
+ printf("At start of multi1 layer vkStorePipeline()\n");
+ VK_RESULT result = pTable->StorePipeline(pipeline, pDataSize, pData);
+ printf("Completed multi1 layer vkStorePipeline()\n");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi1EnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize,
+VK_LAYER_EXPORT VK_RESULT VKAPI multi1EnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize,
size_t* pOutLayerCount, char* const* pOutLayers,
void* pReserved)
{
if (gpu == NULL)
- return xglEnumerateLayers(gpu, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ return vkEnumerateLayers(gpu, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE* pTable = getLayer1Table(gpuw);
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE* pTable = getLayer1Table(gpuw);
- printf("At start of multi1 layer xglEnumerateLayers()\n");
- XGL_RESULT result = pTable->EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
- printf("Completed multi1 layer xglEnumerateLayers()\n");
+ printf("At start of multi1 layer vkEnumerateLayers()\n");
+ VK_RESULT result = pTable->EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ printf("Completed multi1 layer vkEnumerateLayers()\n");
return result;
}
-XGL_LAYER_EXPORT void * XGLAPI multi1GetProcAddr(XGL_PHYSICAL_GPU gpu, const char* pName)
+VK_LAYER_EXPORT void * VKAPI multi1GetProcAddr(VK_PHYSICAL_GPU gpu, const char* pName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
if (gpu == NULL)
return NULL;
getLayer1Table(gpuw);
- if (!strncmp("xglCreateDevice", pName, sizeof ("xglCreateDevice")))
+ if (!strncmp("vkCreateDevice", pName, sizeof ("vkCreateDevice")))
return (void *) multi1CreateDevice;
- else if (!strncmp("xglEnumerateLayers", pName, sizeof ("xglEnumerateLayers")))
+ else if (!strncmp("vkEnumerateLayers", pName, sizeof ("vkEnumerateLayers")))
return (void *) multi1EnumerateLayers;
- else if (!strncmp("xglCreateGraphicsPipeline", pName, sizeof ("xglCreateGraphicsPipeline")))
+ else if (!strncmp("vkCreateGraphicsPipeline", pName, sizeof ("vkCreateGraphicsPipeline")))
return (void *) multi1CreateGraphicsPipeline;
- else if (!strncmp("xglStorePipeline", pName, sizeof ("xglStorePipeline")))
+ else if (!strncmp("vkStorePipeline", pName, sizeof ("vkStorePipeline")))
return (void *) multi1StorePipeline;
- else if (!strncmp("xglGetExtensionSupport", pName, sizeof ("xglGetExtensionSupport")))
- return (void *) xglGetExtensionSupport;
+ else if (!strncmp("vkGetExtensionSupport", pName, sizeof ("vkGetExtensionSupport")))
+ return (void *) vkGetExtensionSupport;
else {
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU) gpuw->nextObject, pName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU) gpuw->nextObject, pName);
}
}
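The getLayer1Table()/GetProcAddr pattern above — an unordered_map from dispatchable object to the next layer's dispatch table, built lazily on first use — can be sketched with placeholder types (the real ones live in vkLayer.h):

```cpp
#include <cassert>
#include <unordered_map>

// Placeholder stand-ins for the dispatchable handle and dispatch table.
struct DispatchTable { int (*CreateThing)(int); };
static int nextLayerCreateThing(int x) { return x + 1; }   // "next layer" stub

// One map per layer: dispatchable object -> this layer's next-table.
static std::unordered_map<void*, DispatchTable*> tableMap;

static DispatchTable* getTable(void* handle) {
    auto it = tableMap.find(handle);
    if (it != tableMap.end())
        return it->second;                        // already initialized
    DispatchTable* pTable = new DispatchTable{nextLayerCreateThing};
    tableMap[handle] = pTable;                    // lazily built on first use
    return pTable;
}

// An intercepted entrypoint: look up the table, then forward down the chain.
int interceptCreateThing(void* handle, int x) {
    return getTable(handle)->CreateThing(x);
}
```

New objects returned by create calls would be emplace()d into the same map, as multi1CreateDevice() does for *pDevice.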
/******************************** Layer multi2 functions **************************/
-static std::unordered_map<void *, XGL_LAYER_DISPATCH_TABLE *> tableMap2;
+static std::unordered_map<void *, VK_LAYER_DISPATCH_TABLE *> tableMap2;
static bool layer2_first_activated = false;
-static XGL_LAYER_DISPATCH_TABLE * getLayer2Table(const XGL_BASE_LAYER_OBJECT *gpuw)
+static VK_LAYER_DISPATCH_TABLE * getLayer2Table(const VK_BASE_LAYER_OBJECT *gpuw)
{
- XGL_LAYER_DISPATCH_TABLE *pTable;
+ VK_LAYER_DISPATCH_TABLE *pTable;
assert(gpuw);
- std::unordered_map<void *, XGL_LAYER_DISPATCH_TABLE *>::const_iterator it = tableMap2.find((void *) gpuw);
+ std::unordered_map<void *, VK_LAYER_DISPATCH_TABLE *>::const_iterator it = tableMap2.find((void *) gpuw);
if (it == tableMap2.end())
{
- pTable = new XGL_LAYER_DISPATCH_TABLE;
+ pTable = new VK_LAYER_DISPATCH_TABLE;
tableMap2[(void *) gpuw] = pTable;
initLayerTable(gpuw, pTable, 2);
return pTable;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi2CreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo,
- XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI multi2CreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo,
+ VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE* pTable = getLayer2Table(gpuw);
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE* pTable = getLayer2Table(gpuw);
- printf("At start of multi2 xglCreateDevice()\n");
- XGL_RESULT result = pTable->CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ printf("At start of multi2 vkCreateDevice()\n");
+ VK_RESULT result = pTable->CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
// create a mapping for the device object into the dispatch table for layer2
tableMap2.emplace(*pDevice, pTable);
- printf("Completed multi2 layer xglCreateDevice()\n");
+ printf("Completed multi2 layer vkCreateDevice()\n");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi2CreateCommandBuffer(XGL_DEVICE device, const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo,
- XGL_CMD_BUFFER* pCmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI multi2CreateCommandBuffer(VK_DEVICE device, const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo,
+ VK_CMD_BUFFER* pCmdBuffer)
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap2[device];
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap2[device];
- printf("At start of multi2 layer xglCreateCommandBuffer()\n");
- XGL_RESULT result = pTable->CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
+ printf("At start of multi2 layer vkCreateCommandBuffer()\n");
+ VK_RESULT result = pTable->CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
// create a mapping for the CmdBuffer object into the dispatch table for layer 2
tableMap2.emplace(*pCmdBuffer, pTable);
- printf("Completed multi2 layer xglCreateCommandBuffer()\n");
+ printf("Completed multi2 layer vkCreateCommandBuffer()\n");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi2BeginCommandBuffer(XGL_CMD_BUFFER cmdBuffer, const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI multi2BeginCommandBuffer(VK_CMD_BUFFER cmdBuffer, const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap2[cmdBuffer];
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap2[cmdBuffer];
- printf("At start of multi2 layer xglBeginCommandBuffer()\n");
- XGL_RESULT result = pTable->BeginCommandBuffer(cmdBuffer, pBeginInfo);
- printf("Completed multi2 layer xglBeginCommandBuffer()\n");
+ printf("At start of multi2 layer vkBeginCommandBuffer()\n");
+ VK_RESULT result = pTable->BeginCommandBuffer(cmdBuffer, pBeginInfo);
+ printf("Completed multi2 layer vkBeginCommandBuffer()\n");
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI multi2EnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize,
+VK_LAYER_EXPORT VK_RESULT VKAPI multi2EnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize,
size_t* pOutLayerCount, char* const* pOutLayers,
void* pReserved)
{
if (gpu == NULL)
- return xglEnumerateLayers(gpu, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ return vkEnumerateLayers(gpu, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE* pTable = getLayer2Table(gpuw);
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE* pTable = getLayer2Table(gpuw);
- printf("At start of multi2 layer xglEnumerateLayers()\n");
- XGL_RESULT result = pTable->EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
- printf("Completed multi2 layer xglEnumerateLayers()\n");
+ printf("At start of multi2 layer vkEnumerateLayers()\n");
+ VK_RESULT result = pTable->EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ printf("Completed multi2 layer vkEnumerateLayers()\n");
return result;
}
-XGL_LAYER_EXPORT void * XGLAPI multi2GetProcAddr(XGL_PHYSICAL_GPU gpu, const char* pName)
+VK_LAYER_EXPORT void * VKAPI multi2GetProcAddr(VK_PHYSICAL_GPU gpu, const char* pName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
if (gpu == NULL)
return NULL;
getLayer2Table(gpuw);
- if (!strncmp("xglCreateDevice", pName, sizeof ("xglCreateDevice")))
+ if (!strncmp("vkCreateDevice", pName, sizeof ("vkCreateDevice")))
return (void *) multi2CreateDevice;
- else if (!strncmp("xglEnumerateLayers", pName, sizeof ("xglEnumerateLayers")))
+ else if (!strncmp("vkEnumerateLayers", pName, sizeof ("vkEnumerateLayers")))
return (void *) multi2EnumerateLayers;
- else if (!strncmp("xglCreateCommandBuffer", pName, sizeof ("xglCreateCommandBuffer")))
+ else if (!strncmp("vkCreateCommandBuffer", pName, sizeof ("vkCreateCommandBuffer")))
return (void *) multi2CreateCommandBuffer;
- else if (!strncmp("xglBeginCommandBuffer", pName, sizeof ("xglBeginCommandBuffer")))
+ else if (!strncmp("vkBeginCommandBuffer", pName, sizeof ("vkBeginCommandBuffer")))
return (void *) multi2BeginCommandBuffer;
- else if (!strncmp("xglGetExtensionSupport", pName, sizeof ("xglGetExtensionSupport")))
- return (void *) xglGetExtensionSupport;
+ else if (!strncmp("vkGetExtensionSupport", pName, sizeof ("vkGetExtensionSupport")))
+ return (void *) vkGetExtensionSupport;
else {
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU) gpuw->nextObject, pName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU) gpuw->nextObject, pName);
}
}
/********************************* Common functions ********************************/
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize,
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize,
size_t* pOutLayerCount, char* const* pOutLayers,
void* pReserved)
{
if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL || pOutLayers[1] == NULL || pReserved == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
if (maxLayerCount < 2)
- return XGL_ERROR_INITIALIZATION_FAILED;
+ return VK_ERROR_INITIALIZATION_FAILED;
*pOutLayerCount = 2;
strncpy((char *) pOutLayers[0], "multi1", maxStringSize);
strncpy((char *) pOutLayers[1], "multi2", maxStringSize);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
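vkEnumerateLayers() above fills caller-allocated name buffers. A minimal re-implementation of that contract can illustrate the caller side; the reserved parameter is omitted, and the integer codes -1/-2/0 stand in for VK_ERROR_INVALID_POINTER, VK_ERROR_INITIALIZATION_FAILED, and VK_SUCCESS.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Sketch of the layer-name enumeration contract: the caller pre-allocates
// maxLayerCount string buffers of at least maxStringSize bytes each.
static int enumerateLayers(size_t maxLayerCount, size_t maxStringSize,
                           size_t* pOutLayerCount, char* const* pOutLayers) {
    if (pOutLayerCount == nullptr || pOutLayers == nullptr ||
        pOutLayers[0] == nullptr || pOutLayers[1] == nullptr)
        return -1;                      // invalid pointer
    if (maxLayerCount < 2)
        return -2;                      // not enough room for both layers
    *pOutLayerCount = 2;
    strncpy(pOutLayers[0], "multi1", maxStringSize);
    strncpy(pOutLayers[1], "multi2", maxStringSize);
    return 0;                           // success
}
```

A caller supplies an array of pre-allocated buffers: `char n0[64], n1[64]; char* names[] = {n0, n1}; size_t count; enumerateLayers(2, 64, &count, names);`.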
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)
{
- XGL_RESULT result;
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_RESULT result;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
/* This entrypoint is NOT going to init its own dispatch table since loader calls here early */
if (!strncmp(pExtName, "multi1", strlen("multi1")))
{
- result = XGL_SUCCESS;
+ result = VK_SUCCESS;
} else if (!strncmp(pExtName, "multi2", strlen("multi2")))
{
- result = XGL_SUCCESS;
+ result = VK_SUCCESS;
} else if (!tableMap1.empty() && (tableMap1.find(gpuw) != tableMap1.end()))
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap1[gpuw];
- result = pTable->GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap1[gpuw];
+ result = pTable->GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
} else if (!tableMap2.empty() && (tableMap2.find(gpuw) != tableMap2.end()))
{
- XGL_LAYER_DISPATCH_TABLE* pTable = tableMap2[gpuw];
- result = pTable->GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ VK_LAYER_DISPATCH_TABLE* pTable = tableMap2[gpuw];
+ result = pTable->GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
} else
{
- result = XGL_ERROR_INVALID_EXTENSION;
+ result = VK_ERROR_INVALID_EXTENSION;
}
return result;
}
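Note the matching subtlety between vkGetExtensionSupport() and the GetProcAddr routines: strncmp with n = strlen(name) is a prefix test, while n = sizeof("literal") (length + 1, so the terminating NUL is compared too) behaves as an exact match. A small demonstration:

```cpp
#include <cassert>
#include <cstring>

// Prefix test: "multi1_ext" still matches "multi1".
static bool prefixMatch(const char* s, const char* name) {
    return strncmp(s, name, strlen(name)) == 0;
}

// Exact match: the comparison length covers the NUL terminator,
// which is what sizeof("multi1") supplies in the code above.
static bool exactMatch(const char* s, const char* name) {
    return strncmp(s, name, strlen(name) + 1) == 0;
}
```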
-XGL_LAYER_EXPORT void * XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* pName)
+VK_LAYER_EXPORT void * VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* pName)
{
// To find each layer's GPA routine, the loader searches via "<layerName>GetProcAddr"
if (!strncmp("multi1GetProcAddr", pName, sizeof("multi1GetProcAddr")))
return (void *) multi1GetProcAddr;
else if (!strncmp("multi2GetProcAddr", pName, sizeof("multi2GetProcAddr")))
return (void *) multi2GetProcAddr;
- else if (!strncmp("xglGetProcAddr", pName, sizeof("xglGetProcAddr")))
- return (void *) xglGetProcAddr;
+ else if (!strncmp("vkGetProcAddr", pName, sizeof("vkGetProcAddr")))
+ return (void *) vkGetProcAddr;
// Use the first activated layer's GPA, since dispatch table activation happens in order
else if (layer1_first_activated)
} //extern "C"
#endif
-static void initLayerTable(const XGL_BASE_LAYER_OBJECT *gpuw, XGL_LAYER_DISPATCH_TABLE *pTable, const unsigned int layerNum)
+static void initLayerTable(const VK_BASE_LAYER_OBJECT *gpuw, VK_LAYER_DISPATCH_TABLE *pTable, const unsigned int layerNum)
{
if (layerNum == 2 && layer1_first_activated == false)
layer2_first_activated = true;
if (layerNum == 1 && layer2_first_activated == false)
layer1_first_activated = true;
- layer_initialize_dispatch_table(pTable, gpuw->pGPA, (XGL_PHYSICAL_GPU) gpuw->nextObject);
+ layer_initialize_dispatch_table(pTable, gpuw->pGPA, (VK_PHYSICAL_GPU) gpuw->nextObject);
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
* DEALINGS IN THE SOFTWARE.
*/
-#include "xglLayer.h"
+#include "vkLayer.h"
// Object Tracker ERROR codes
typedef enum _OBJECT_TRACK_ERROR
{
} OBJECT_STATUS;
// TODO: Make this code-generated
// Object type enum
-typedef enum _XGL_OBJECT_TYPE
+typedef enum _VK_OBJECT_TYPE
{
- XGL_OBJECT_TYPE_SAMPLER,
- XGL_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT,
- XGL_OBJECT_TYPE_DESCRIPTOR_SET,
- XGL_OBJECT_TYPE_DESCRIPTOR_POOL,
- XGL_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT,
- XGL_OBJECT_TYPE_IMAGE_VIEW,
- XGL_OBJECT_TYPE_SEMAPHORE,
- XGL_OBJECT_TYPE_SHADER,
- XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT,
- XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_CHAIN,
- XGL_OBJECT_TYPE_BUFFER,
- XGL_OBJECT_TYPE_PIPELINE,
- XGL_OBJECT_TYPE_DEVICE,
- XGL_OBJECT_TYPE_QUERY_POOL,
- XGL_OBJECT_TYPE_EVENT,
- XGL_OBJECT_TYPE_QUEUE,
- XGL_OBJECT_TYPE_PHYSICAL_GPU,
- XGL_OBJECT_TYPE_RENDER_PASS,
- XGL_OBJECT_TYPE_FRAMEBUFFER,
- XGL_OBJECT_TYPE_IMAGE,
- XGL_OBJECT_TYPE_BUFFER_VIEW,
- XGL_OBJECT_TYPE_DEPTH_STENCIL_VIEW,
- XGL_OBJECT_TYPE_INSTANCE,
- XGL_OBJECT_TYPE_PIPELINE_DELTA,
- XGL_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT,
- XGL_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW,
- XGL_OBJECT_TYPE_GPU_MEMORY,
- XGL_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT,
- XGL_OBJECT_TYPE_FENCE,
- XGL_OBJECT_TYPE_CMD_BUFFER,
- XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY,
+ VK_OBJECT_TYPE_SAMPLER,
+ VK_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT,
+ VK_OBJECT_TYPE_DESCRIPTOR_SET,
+ VK_OBJECT_TYPE_DESCRIPTOR_POOL,
+ VK_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT,
+ VK_OBJECT_TYPE_IMAGE_VIEW,
+ VK_OBJECT_TYPE_SEMAPHORE,
+ VK_OBJECT_TYPE_SHADER,
+ VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT,
+ VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_CHAIN,
+ VK_OBJECT_TYPE_BUFFER,
+ VK_OBJECT_TYPE_PIPELINE,
+ VK_OBJECT_TYPE_DEVICE,
+ VK_OBJECT_TYPE_QUERY_POOL,
+ VK_OBJECT_TYPE_EVENT,
+ VK_OBJECT_TYPE_QUEUE,
+ VK_OBJECT_TYPE_PHYSICAL_GPU,
+ VK_OBJECT_TYPE_RENDER_PASS,
+ VK_OBJECT_TYPE_FRAMEBUFFER,
+ VK_OBJECT_TYPE_IMAGE,
+ VK_OBJECT_TYPE_BUFFER_VIEW,
+ VK_OBJECT_TYPE_DEPTH_STENCIL_VIEW,
+ VK_OBJECT_TYPE_INSTANCE,
+ VK_OBJECT_TYPE_PIPELINE_DELTA,
+ VK_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT,
+ VK_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW,
+ VK_OBJECT_TYPE_GPU_MEMORY,
+ VK_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT,
+ VK_OBJECT_TYPE_FENCE,
+ VK_OBJECT_TYPE_CMD_BUFFER,
+ VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY,
- XGL_OBJECT_TYPE_UNKNOWN,
- XGL_NUM_OBJECT_TYPE,
- XGL_OBJECT_TYPE_ANY, // Allow global object list to be queried/retrieved
-} XGL_OBJECT_TYPE;
+ VK_OBJECT_TYPE_UNKNOWN,
+ VK_NUM_OBJECT_TYPE,
+ VK_OBJECT_TYPE_ANY, // Allow global object list to be queried/retrieved
+} VK_OBJECT_TYPE;
-static const char* string_XGL_OBJECT_TYPE(XGL_OBJECT_TYPE type) {
+static const char* string_VK_OBJECT_TYPE(VK_OBJECT_TYPE type) {
switch (type)
{
- case XGL_OBJECT_TYPE_DEVICE:
+ case VK_OBJECT_TYPE_DEVICE:
return "DEVICE";
- case XGL_OBJECT_TYPE_PIPELINE:
+ case VK_OBJECT_TYPE_PIPELINE:
return "PIPELINE";
- case XGL_OBJECT_TYPE_FENCE:
+ case VK_OBJECT_TYPE_FENCE:
return "FENCE";
- case XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT:
+ case VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT:
return "DESCRIPTOR_SET_LAYOUT";
- case XGL_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_CHAIN:
+ case VK_OBJECT_TYPE_DESCRIPTOR_SET_LAYOUT_CHAIN:
return "DESCRIPTOR_SET_LAYOUT_CHAIN";
- case XGL_OBJECT_TYPE_GPU_MEMORY:
+ case VK_OBJECT_TYPE_GPU_MEMORY:
return "GPU_MEMORY";
- case XGL_OBJECT_TYPE_QUEUE:
+ case VK_OBJECT_TYPE_QUEUE:
return "QUEUE";
- case XGL_OBJECT_TYPE_IMAGE:
+ case VK_OBJECT_TYPE_IMAGE:
return "IMAGE";
- case XGL_OBJECT_TYPE_CMD_BUFFER:
+ case VK_OBJECT_TYPE_CMD_BUFFER:
return "CMD_BUFFER";
- case XGL_OBJECT_TYPE_SEMAPHORE:
+ case VK_OBJECT_TYPE_SEMAPHORE:
return "SEMAPHORE";
- case XGL_OBJECT_TYPE_FRAMEBUFFER:
+ case VK_OBJECT_TYPE_FRAMEBUFFER:
return "FRAMEBUFFER";
- case XGL_OBJECT_TYPE_SAMPLER:
+ case VK_OBJECT_TYPE_SAMPLER:
return "SAMPLER";
- case XGL_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW:
+ case VK_OBJECT_TYPE_COLOR_ATTACHMENT_VIEW:
return "COLOR_ATTACHMENT_VIEW";
- case XGL_OBJECT_TYPE_BUFFER_VIEW:
+ case VK_OBJECT_TYPE_BUFFER_VIEW:
return "BUFFER_VIEW";
- case XGL_OBJECT_TYPE_DESCRIPTOR_SET:
+ case VK_OBJECT_TYPE_DESCRIPTOR_SET:
return "DESCRIPTOR_SET";
- case XGL_OBJECT_TYPE_PHYSICAL_GPU:
+ case VK_OBJECT_TYPE_PHYSICAL_GPU:
return "PHYSICAL_GPU";
- case XGL_OBJECT_TYPE_IMAGE_VIEW:
+ case VK_OBJECT_TYPE_IMAGE_VIEW:
return "IMAGE_VIEW";
- case XGL_OBJECT_TYPE_BUFFER:
+ case VK_OBJECT_TYPE_BUFFER:
return "BUFFER";
- case XGL_OBJECT_TYPE_PIPELINE_DELTA:
+ case VK_OBJECT_TYPE_PIPELINE_DELTA:
return "PIPELINE_DELTA";
- case XGL_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_RS_STATE_OBJECT:
return "DYNAMIC_RS_STATE_OBJECT";
- case XGL_OBJECT_TYPE_EVENT:
+ case VK_OBJECT_TYPE_EVENT:
return "EVENT";
- case XGL_OBJECT_TYPE_DEPTH_STENCIL_VIEW:
+ case VK_OBJECT_TYPE_DEPTH_STENCIL_VIEW:
return "DEPTH_STENCIL_VIEW";
- case XGL_OBJECT_TYPE_SHADER:
+ case VK_OBJECT_TYPE_SHADER:
return "SHADER";
- case XGL_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_DS_STATE_OBJECT:
return "DYNAMIC_DS_STATE_OBJECT";
- case XGL_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_VP_STATE_OBJECT:
return "DYNAMIC_VP_STATE_OBJECT";
- case XGL_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT:
+ case VK_OBJECT_TYPE_DYNAMIC_CB_STATE_OBJECT:
return "DYNAMIC_CB_STATE_OBJECT";
- case XGL_OBJECT_TYPE_INSTANCE:
+ case VK_OBJECT_TYPE_INSTANCE:
return "INSTANCE";
- case XGL_OBJECT_TYPE_RENDER_PASS:
+ case VK_OBJECT_TYPE_RENDER_PASS:
return "RENDER_PASS";
- case XGL_OBJECT_TYPE_QUERY_POOL:
+ case VK_OBJECT_TYPE_QUERY_POOL:
return "QUERY_POOL";
- case XGL_OBJECT_TYPE_DESCRIPTOR_POOL:
+ case VK_OBJECT_TYPE_DESCRIPTOR_POOL:
return "DESCRIPTOR_POOL";
- case XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY:
+ case VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY:
return "PRESENTABLE_IMAGE_MEMORY";
default:
return "UNKNOWN";
typedef struct _OBJTRACK_NODE {
void *pObj;
- XGL_OBJECT_TYPE objType;
+ VK_OBJECT_TYPE objType;
uint64_t numUses;
OBJECT_STATUS status;
} OBJTRACK_NODE;
// Prototypes for extension functions
-uint64_t objTrackGetObjectCount(XGL_OBJECT_TYPE type);
-XGL_RESULT objTrackGetObjects(XGL_OBJECT_TYPE type, uint64_t objCount, OBJTRACK_NODE* pObjNodeArray);
+uint64_t objTrackGetObjectCount(VK_OBJECT_TYPE type);
+VK_RESULT objTrackGetObjects(VK_OBJECT_TYPE type, uint64_t objCount, OBJTRACK_NODE* pObjNodeArray);
// Func ptr typedefs
-typedef uint64_t (*OBJ_TRACK_GET_OBJECT_COUNT)(XGL_OBJECT_TYPE);
-typedef XGL_RESULT (*OBJ_TRACK_GET_OBJECTS)(XGL_OBJECT_TYPE, uint64_t, OBJTRACK_NODE*);
+typedef uint64_t (*OBJ_TRACK_GET_OBJECT_COUNT)(VK_OBJECT_TYPE);
+typedef VK_RESULT (*OBJ_TRACK_GET_OBJECTS)(VK_OBJECT_TYPE, uint64_t, OBJTRACK_NODE*);
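The OBJTRACK_NODE bookkeeping and the objTrackGetObjectCount() query can be sketched as follows; the enum values and helper names here are illustrative stand-ins for the layer's real ones:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Illustrative stand-ins for VK_OBJECT_TYPE and OBJTRACK_NODE.
enum ObjType { OBJ_TYPE_DEVICE, OBJ_TYPE_IMAGE, NUM_OBJ_TYPE };

struct TrackNode {
    void*    pObj;       // the tracked handle
    ObjType  objType;
    uint64_t numUses;
};

static std::unordered_map<void*, TrackNode> g_objects;

static void trackCreate(void* obj, ObjType type) {
    g_objects[obj] = TrackNode{obj, type, 0};
}

static void trackDestroy(void* obj) {
    g_objects.erase(obj);
}

// Analogue of objTrackGetObjectCount(): count live objects of one type.
static uint64_t getObjectCount(ObjType type) {
    uint64_t n = 0;
    for (const auto& kv : g_objects)
        if (kv.second.objType == type)
            ++n;
    return n;
}
```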
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include <sstream>
#include "loader_platform.h"
-#include "xglLayer.h"
+#include "vkLayer.h"
#include "layers_config.h"
-#include "xgl_enum_validate_helper.h"
-#include "xgl_struct_validate_helper.h"
+#include "vk_enum_validate_helper.h"
+#include "vk_struct_validate_helper.h"
//The following is #included again to catch certain OS-specific functions being used:
#include "loader_platform.h"
#include "layers_msg.h"
-static XGL_LAYER_DISPATCH_TABLE nextTable;
-static XGL_BASE_LAYER_OBJECT *pCurObj;
+static VK_LAYER_DISPATCH_TABLE nextTable;
+static VK_BASE_LAYER_OBJECT *pCurObj;
static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);
-#include "xgl_dispatch_table_helper.h"
+#include "vk_dispatch_table_helper.h"
static void initParamChecker(void)
{
getLayerOptionEnum("ParamCheckerReportLevel", (uint32_t *) &g_reportingLevel);
g_actionIsDefault = getLayerOptionEnum("ParamCheckerDebugAction", (uint32_t *) &g_debugAction);
- if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)
+ if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)
{
strOpt = getLayerOption("ParamCheckerLogFilename");
if (strOpt)
g_logFile = stdout;
}
- xglGetProcAddrType fpNextGPA;
+ vkGetProcAddrType fpNextGPA;
fpNextGPA = pCurObj->pGPA;
assert(fpNextGPA);
- layer_initialize_dispatch_table(&nextTable, fpNextGPA, (XGL_PHYSICAL_GPU) pCurObj->nextObject);
+ layer_initialize_dispatch_table(&nextTable, fpNextGPA, (VK_PHYSICAL_GPU) pCurObj->nextObject);
}
-void PreCreateInstance(const XGL_APPLICATION_INFO* pAppInfo, const XGL_ALLOC_CALLBACKS* pAllocCb)
+void PreCreateInstance(const VK_APPLICATION_INFO* pAppInfo, const VK_ALLOC_CALLBACKS* pAllocCb)
{
if(pAppInfo == nullptr)
{
- char const str[] = "xglCreateInstance parameter, XGL_APPLICATION_INFO* pAppInfo, is "\
+ char const str[] = "vkCreateInstance parameter, VK_APPLICATION_INFO* pAppInfo, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(pAppInfo->sType != XGL_STRUCTURE_TYPE_APPLICATION_INFO)
+ if(pAppInfo->sType != VK_STRUCTURE_TYPE_APPLICATION_INFO)
{
- char const str[] = "xglCreateInstance parameter, XGL_STRUCTURE_TYPE_APPLICATION_INFO "\
- "pAppInfo->sType, is not XGL_STRUCTURE_TYPE_APPLICATION_INFO (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateInstance parameter, VK_STRUCTURE_TYPE_APPLICATION_INFO "\
+ "pAppInfo->sType, is not VK_STRUCTURE_TYPE_APPLICATION_INFO (precondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
// TODO: What else can be validated in pAppInfo?
- // TODO: XGL_API_VERSION validation.
+ // TODO: VK_API_VERSION validation.
// It's okay if pAllocCb is a nullptr.
if(pAllocCb != nullptr)
{
- if(!xgl_validate_xgl_alloc_callbacks(pAllocCb))
+ if(!vk_validate_vk_alloc_callbacks(pAllocCb))
{
- char const str[] = "xglCreateInstance parameter, XGL_ALLOC_CALLBACKS* pAllocCb, "\
+ char const str[] = "vkCreateInstance parameter, VK_ALLOC_CALLBACKS* pAllocCb, "\
"contains an invalid value (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
}
-void PostCreateInstance(XGL_RESULT result, XGL_INSTANCE* pInstance)
+void PostCreateInstance(VK_RESULT result, VK_INSTANCE* pInstance)
{
- if(result != XGL_SUCCESS)
+ if(result != VK_SUCCESS)
{
- // TODO: Spit out XGL_RESULT value.
- char const str[] = "xglCreateInstance failed (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ // TODO: Spit out VK_RESULT value.
+ char const str[] = "vkCreateInstance failed (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pInstance == nullptr)
{
- char const str[] = "xglCreateInstance parameter, XGL_INSTANCE* pInstance, is nullptr (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateInstance parameter, VK_INSTANCE* pInstance, is nullptr "\
+ "(postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateInstance(const XGL_INSTANCE_CREATE_INFO* pCreateInfo, XGL_INSTANCE* pInstance)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateInstance(const VK_INSTANCE_CREATE_INFO* pCreateInfo, VK_INSTANCE* pInstance)
{
PreCreateInstance(pCreateInfo->pAppInfo, pCreateInfo->pAllocCb);
- XGL_RESULT result = nextTable.CreateInstance(pCreateInfo, pInstance);
+ VK_RESULT result = nextTable.CreateInstance(pCreateInfo, pInstance);
PostCreateInstance(result, pInstance);
return result;
}
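The param-checker entrypoints above follow a fixed shape: validate inputs (Pre), forward to the next layer's table, validate outputs (Post), and report rather than abort on failure. A stripped-down sketch with placeholder types:

```cpp
#include <cassert>

// Placeholder types; the real entrypoint takes VK_INSTANCE_CREATE_INFO and
// VK_INSTANCE. Validation failures are reported, not fatal, matching the
// layer's layerCbMsg() behavior.
static int g_errors = 0;
static void report(const char* /*msg*/) { ++g_errors; }

// Stub for the next layer's CreateInstance in the dispatch chain.
static int nextCreateInstance(const int* /*pCreateInfo*/, int* pInstance) {
    *pInstance = 7;
    return 0;                                    // "VK_SUCCESS"
}

static int layerCreateInstance(const int* pCreateInfo, int* pInstance) {
    if (pCreateInfo == nullptr)                  // precondition check
        report("pCreateInfo is nullptr (precondition)");
    int result = nextCreateInstance(pCreateInfo, pInstance);
    if (result != 0)                             // postcondition checks
        report("CreateInstance failed (postcondition)");
    else if (pInstance == nullptr)
        report("pInstance is nullptr (postcondition)");
    return result;
}
```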
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyInstance(XGL_INSTANCE instance)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyInstance(VK_INSTANCE instance)
{
- XGL_RESULT result = nextTable.DestroyInstance(instance);
+ VK_RESULT result = nextTable.DestroyInstance(instance);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateGpus(XGL_INSTANCE instance, uint32_t maxGpus, uint32_t* pGpuCount, XGL_PHYSICAL_GPU* pGpus)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateGpus(VK_INSTANCE instance, uint32_t maxGpus, uint32_t* pGpuCount, VK_PHYSICAL_GPU* pGpus)
{
- XGL_RESULT result = nextTable.EnumerateGpus(instance, maxGpus, pGpuCount, pGpus);
+ VK_RESULT result = nextTable.EnumerateGpus(instance, maxGpus, pGpuCount, pGpus);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetGpuInfo(XGL_PHYSICAL_GPU gpu, XGL_PHYSICAL_GPU_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetGpuInfo(VK_PHYSICAL_GPU gpu, VK_PHYSICAL_GPU_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initParamChecker);
char str[1024];
- if (!validate_XGL_PHYSICAL_GPU_INFO_TYPE(infoType)) {
+ if (!validate_VK_PHYSICAL_GPU_INFO_TYPE(infoType)) {
sprintf(str, "Parameter infoType to function GetGpuInfo has invalid value of %i.", (int)infoType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.GetGpuInfo((XGL_PHYSICAL_GPU)gpuw->nextObject, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetGpuInfo((VK_PHYSICAL_GPU)gpuw->nextObject, infoType, pDataSize, pData);
return result;
}
-void PreCreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo)
+void PreCreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo)
{
if(gpu == nullptr)
{
- char const str[] = "xglCreateDevice parameter, XGL_PHYSICAL_GPU gpu, is nullptr "\
+ char const str[] = "vkCreateDevice parameter, VK_PHYSICAL_GPU gpu, is nullptr "\
"(precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pCreateInfo == nullptr)
{
- char const str[] = "xglCreateDevice parameter, XGL_DEVICE_CREATE_INFO* pCreateInfo, is "\
+ char const str[] = "vkCreateDevice parameter, VK_DEVICE_CREATE_INFO* pCreateInfo, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(pCreateInfo->sType != XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO)
+ if(pCreateInfo->sType != VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO)
{
- char const str[] = "xglCreateDevice parameter, XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO "\
- "pCreateInfo->sType, is not XGL_STRUCTURE_TYPE_DEVICE_CREATE_INFO (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateDevice parameter, VK_STRUCTURE_TYPE "\
+ "pCreateInfo->sType, is not VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO (precondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pCreateInfo->queueRecordCount == 0)
{
- char const str[] = "xglCreateDevice parameter, uint32_t pCreateInfo->queueRecordCount, is "\
+ char const str[] = "vkCreateDevice parameter, uint32_t pCreateInfo->queueRecordCount, is "\
"zero (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pCreateInfo->pRequestedQueues == nullptr)
{
- char const str[] = "xglCreateDevice parameter, XGL_DEVICE_QUEUE_CREATE_INFO* pCreateInfo->pRequestedQueues, is "\
+ char const str[] = "vkCreateDevice parameter, VK_DEVICE_QUEUE_CREATE_INFO* pCreateInfo->pRequestedQueues, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
for(uint32_t i = 0; i < pCreateInfo->queueRecordCount; ++i)
{
- if(!xgl_validate_xgl_device_queue_create_info(&(pCreateInfo->pRequestedQueues[i])))
+ if(!vk_validate_vk_device_queue_create_info(&(pCreateInfo->pRequestedQueues[i])))
{
std::stringstream ss;
- ss << "xglCreateDevice parameter, XGL_DEVICE_QUEUE_CREATE_INFO pCreateInfo->pRequestedQueues[" << i <<
+ ss << "vkCreateDevice parameter, VK_DEVICE_QUEUE_CREATE_INFO pCreateInfo->pRequestedQueues[" << i <<
"], is invalid (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
}
- if(!validate_XGL_VALIDATION_LEVEL(pCreateInfo->maxValidationLevel))
+ if(!validate_VK_VALIDATION_LEVEL(pCreateInfo->maxValidationLevel))
{
- char const str[] = "xglCreateDevice parameter, XGL_VALIDATION_LEVEL pCreateInfo->maxValidationLevel, is "\
+ char const str[] = "vkCreateDevice parameter, VK_VALIDATION_LEVEL pCreateInfo->maxValidationLevel, is "\
"unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-void PostCreateDevice(XGL_RESULT result, XGL_DEVICE* pDevice)
+void PostCreateDevice(VK_RESULT result, VK_DEVICE* pDevice)
{
- if(result != XGL_SUCCESS)
+ if(result != VK_SUCCESS)
{
- // TODO: Spit out XGL_RESULT value.
- char const str[] = "xglCreateDevice failed (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ // TODO: Spit out VK_RESULT value.
+ char const str[] = "vkCreateDevice failed (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pDevice == nullptr)
{
- char const str[] = "xglCreateDevice parameter, XGL_DEVICE* pDevice, is nullptr (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateDevice parameter, VK_DEVICE* pDevice, is nullptr (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDevice(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo, XGL_DEVICE* pDevice)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDevice(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo, VK_DEVICE* pDevice)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initParamChecker);
PreCreateDevice(gpu, pCreateInfo);
- XGL_RESULT result = nextTable.CreateDevice((XGL_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
+ VK_RESULT result = nextTable.CreateDevice((VK_PHYSICAL_GPU)gpuw->nextObject, pCreateInfo, pDevice);
PostCreateDevice(result, pDevice);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyDevice(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyDevice(VK_DEVICE device)
{
- XGL_RESULT result = nextTable.DestroyDevice(device);
+ VK_RESULT result = nextTable.DestroyDevice(device);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initParamChecker);
- XGL_RESULT result = nextTable.GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);
+ VK_RESULT result = nextTable.GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
{
char str[1024];
if (gpu != NULL) {
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
sprintf(str, "At start of layered EnumerateLayers\n");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, nullptr, 0, 0, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, nullptr, 0, 0, "PARAMCHECK", str);
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initParamChecker);
- XGL_RESULT result = nextTable.EnumerateLayers((XGL_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
+ VK_RESULT result = nextTable.EnumerateLayers((VK_PHYSICAL_GPU)gpuw->nextObject, maxLayerCount, maxStringSize, pOutLayerCount, pOutLayers, pReserved);
sprintf(str, "Completed layered EnumerateLayers\n");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, nullptr, 0, 0, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, nullptr, 0, 0, "PARAMCHECK", str);
fflush(stdout);
return result;
} else {
if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
// This layer compatible with all GPUs
*pOutLayerCount = 1;
strncpy(pOutLayers[0], "ParamChecker", maxStringSize);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetDeviceQueue(XGL_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, XGL_QUEUE* pQueue)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetDeviceQueue(VK_DEVICE device, uint32_t queueNodeIndex, uint32_t queueIndex, VK_QUEUE* pQueue)
{
- XGL_RESULT result = nextTable.GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);
+ VK_RESULT result = nextTable.GetDeviceQueue(device, queueNodeIndex, queueIndex, pQueue);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueSubmit(XGL_QUEUE queue, uint32_t cmdBufferCount, const XGL_CMD_BUFFER* pCmdBuffers, XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueSubmit(VK_QUEUE queue, uint32_t cmdBufferCount, const VK_CMD_BUFFER* pCmdBuffers, VK_FENCE fence)
{
- XGL_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, fence);
+ VK_RESULT result = nextTable.QueueSubmit(queue, cmdBufferCount, pCmdBuffers, fence);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueAddMemReference(XGL_QUEUE queue, XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueAddMemReference(VK_QUEUE queue, VK_GPU_MEMORY mem)
{
- XGL_RESULT result = nextTable.QueueAddMemReference(queue, mem);
+ VK_RESULT result = nextTable.QueueAddMemReference(queue, mem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueRemoveMemReference(XGL_QUEUE queue, XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueRemoveMemReference(VK_QUEUE queue, VK_GPU_MEMORY mem)
{
- XGL_RESULT result = nextTable.QueueRemoveMemReference(queue, mem);
+ VK_RESULT result = nextTable.QueueRemoveMemReference(queue, mem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueWaitIdle(XGL_QUEUE queue)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueWaitIdle(VK_QUEUE queue)
{
- XGL_RESULT result = nextTable.QueueWaitIdle(queue);
+ VK_RESULT result = nextTable.QueueWaitIdle(queue);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDeviceWaitIdle(XGL_DEVICE device)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDeviceWaitIdle(VK_DEVICE device)
{
- XGL_RESULT result = nextTable.DeviceWaitIdle(device);
+ VK_RESULT result = nextTable.DeviceWaitIdle(device);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglAllocMemory(XGL_DEVICE device, const XGL_MEMORY_ALLOC_INFO* pAllocInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkAllocMemory(VK_DEVICE device, const VK_MEMORY_ALLOC_INFO* pAllocInfo, VK_GPU_MEMORY* pMem)
{
char str[1024];
if (!pAllocInfo) {
sprintf(str, "Struct ptr parameter pAllocInfo to function AllocMemory is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_memory_alloc_info(pAllocInfo)) {
+ else if (!vk_validate_vk_memory_alloc_info(pAllocInfo)) {
sprintf(str, "Parameter pAllocInfo to function AllocMemory contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.AllocMemory(device, pAllocInfo, pMem);
+ VK_RESULT result = nextTable.AllocMemory(device, pAllocInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglFreeMemory(XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkFreeMemory(VK_GPU_MEMORY mem)
{
- XGL_RESULT result = nextTable.FreeMemory(mem);
+ VK_RESULT result = nextTable.FreeMemory(mem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglSetMemoryPriority(XGL_GPU_MEMORY mem, XGL_MEMORY_PRIORITY priority)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkSetMemoryPriority(VK_GPU_MEMORY mem, VK_MEMORY_PRIORITY priority)
{
char str[1024];
- if (!validate_XGL_MEMORY_PRIORITY(priority)) {
+ if (!validate_VK_MEMORY_PRIORITY(priority)) {
sprintf(str, "Parameter priority to function SetMemoryPriority has invalid value of %i.", (int)priority);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.SetMemoryPriority(mem, priority);
+ VK_RESULT result = nextTable.SetMemoryPriority(mem, priority);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglMapMemory(XGL_GPU_MEMORY mem, XGL_FLAGS flags, void** ppData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkMapMemory(VK_GPU_MEMORY mem, VK_FLAGS flags, void** ppData)
{
- XGL_RESULT result = nextTable.MapMemory(mem, flags, ppData);
+ VK_RESULT result = nextTable.MapMemory(mem, flags, ppData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglUnmapMemory(XGL_GPU_MEMORY mem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkUnmapMemory(VK_GPU_MEMORY mem)
{
- XGL_RESULT result = nextTable.UnmapMemory(mem);
+ VK_RESULT result = nextTable.UnmapMemory(mem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglPinSystemMemory(XGL_DEVICE device, const void* pSysMem, size_t memSize, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkPinSystemMemory(VK_DEVICE device, const void* pSysMem, size_t memSize, VK_GPU_MEMORY* pMem)
{
- XGL_RESULT result = nextTable.PinSystemMemory(device, pSysMem, memSize, pMem);
+ VK_RESULT result = nextTable.PinSystemMemory(device, pSysMem, memSize, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetMultiGpuCompatibility(XGL_PHYSICAL_GPU gpu0, XGL_PHYSICAL_GPU gpu1, XGL_GPU_COMPATIBILITY_INFO* pInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetMultiGpuCompatibility(VK_PHYSICAL_GPU gpu0, VK_PHYSICAL_GPU gpu1, VK_GPU_COMPATIBILITY_INFO* pInfo)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu0;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu0;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initParamChecker);
- XGL_RESULT result = nextTable.GetMultiGpuCompatibility((XGL_PHYSICAL_GPU)gpuw->nextObject, gpu1, pInfo);
+ VK_RESULT result = nextTable.GetMultiGpuCompatibility((VK_PHYSICAL_GPU)gpuw->nextObject, gpu1, pInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenSharedMemory(XGL_DEVICE device, const XGL_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenSharedMemory(VK_DEVICE device, const VK_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem)
{
char str[1024];
if (!pOpenInfo) {
sprintf(str, "Struct ptr parameter pOpenInfo to function OpenSharedMemory is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_memory_open_info(pOpenInfo)) {
+ else if (!vk_validate_vk_memory_open_info(pOpenInfo)) {
sprintf(str, "Parameter pOpenInfo to function OpenSharedMemory contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.OpenSharedMemory(device, pOpenInfo, pMem);
+ VK_RESULT result = nextTable.OpenSharedMemory(device, pOpenInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenSharedSemaphore(XGL_DEVICE device, const XGL_SEMAPHORE_OPEN_INFO* pOpenInfo, XGL_SEMAPHORE* pSemaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenSharedSemaphore(VK_DEVICE device, const VK_SEMAPHORE_OPEN_INFO* pOpenInfo, VK_SEMAPHORE* pSemaphore)
{
char str[1024];
if (!pOpenInfo) {
sprintf(str, "Struct ptr parameter pOpenInfo to function OpenSharedSemaphore is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_semaphore_open_info(pOpenInfo)) {
+ else if (!vk_validate_vk_semaphore_open_info(pOpenInfo)) {
sprintf(str, "Parameter pOpenInfo to function OpenSharedSemaphore contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.OpenSharedSemaphore(device, pOpenInfo, pSemaphore);
+ VK_RESULT result = nextTable.OpenSharedSemaphore(device, pOpenInfo, pSemaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenPeerMemory(XGL_DEVICE device, const XGL_PEER_MEMORY_OPEN_INFO* pOpenInfo, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenPeerMemory(VK_DEVICE device, const VK_PEER_MEMORY_OPEN_INFO* pOpenInfo, VK_GPU_MEMORY* pMem)
{
char str[1024];
if (!pOpenInfo) {
sprintf(str, "Struct ptr parameter pOpenInfo to function OpenPeerMemory is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_peer_memory_open_info(pOpenInfo)) {
+ else if (!vk_validate_vk_peer_memory_open_info(pOpenInfo)) {
sprintf(str, "Parameter pOpenInfo to function OpenPeerMemory contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.OpenPeerMemory(device, pOpenInfo, pMem);
+ VK_RESULT result = nextTable.OpenPeerMemory(device, pOpenInfo, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglOpenPeerImage(XGL_DEVICE device, const XGL_PEER_IMAGE_OPEN_INFO* pOpenInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkOpenPeerImage(VK_DEVICE device, const VK_PEER_IMAGE_OPEN_INFO* pOpenInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem)
{
char str[1024];
if (!pOpenInfo) {
sprintf(str, "Struct ptr parameter pOpenInfo to function OpenPeerImage is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_peer_image_open_info(pOpenInfo)) {
+ else if (!vk_validate_vk_peer_image_open_info(pOpenInfo)) {
sprintf(str, "Parameter pOpenInfo to function OpenPeerImage contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.OpenPeerImage(device, pOpenInfo, pImage, pMem);
+ VK_RESULT result = nextTable.OpenPeerImage(device, pOpenInfo, pImage, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDestroyObject(XGL_OBJECT object)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDestroyObject(VK_OBJECT object)
{
- XGL_RESULT result = nextTable.DestroyObject(object);
+ VK_RESULT result = nextTable.DestroyObject(object);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetObjectInfo(XGL_BASE_OBJECT object, XGL_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetObjectInfo(VK_BASE_OBJECT object, VK_OBJECT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
char str[1024];
- if (!validate_XGL_OBJECT_INFO_TYPE(infoType)) {
+ if (!validate_VK_OBJECT_INFO_TYPE(infoType)) {
sprintf(str, "Parameter infoType to function GetObjectInfo has invalid value of %i.", (int)infoType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.GetObjectInfo(object, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetObjectInfo(object, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindObjectMemory(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_MEMORY mem, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindObjectMemory(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_MEMORY mem, VK_GPU_SIZE offset)
{
- XGL_RESULT result = nextTable.BindObjectMemory(object, allocationIdx, mem, offset);
+ VK_RESULT result = nextTable.BindObjectMemory(object, allocationIdx, mem, offset);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindObjectMemoryRange(XGL_OBJECT object, uint32_t allocationIdx, XGL_GPU_SIZE rangeOffset, XGL_GPU_SIZE rangeSize, XGL_GPU_MEMORY mem, XGL_GPU_SIZE memOffset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindObjectMemoryRange(VK_OBJECT object, uint32_t allocationIdx, VK_GPU_SIZE rangeOffset, VK_GPU_SIZE rangeSize, VK_GPU_MEMORY mem, VK_GPU_SIZE memOffset)
{
- XGL_RESULT result = nextTable.BindObjectMemoryRange(object, allocationIdx, rangeOffset, rangeSize, mem, memOffset);
+ VK_RESULT result = nextTable.BindObjectMemoryRange(object, allocationIdx, rangeOffset, rangeSize, mem, memOffset);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBindImageMemoryRange(XGL_IMAGE image, uint32_t allocationIdx, const XGL_IMAGE_MEMORY_BIND_INFO* bindInfo, XGL_GPU_MEMORY mem, XGL_GPU_SIZE memOffset)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBindImageMemoryRange(VK_IMAGE image, uint32_t allocationIdx, const VK_IMAGE_MEMORY_BIND_INFO* bindInfo, VK_GPU_MEMORY mem, VK_GPU_SIZE memOffset)
{
char str[1024];
if (!bindInfo) {
sprintf(str, "Struct ptr parameter bindInfo to function BindImageMemoryRange is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_image_memory_bind_info(bindInfo)) {
+ else if (!vk_validate_vk_image_memory_bind_info(bindInfo)) {
sprintf(str, "Parameter bindInfo to function BindImageMemoryRange contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.BindImageMemoryRange(image, allocationIdx, bindInfo, mem, memOffset);
+ VK_RESULT result = nextTable.BindImageMemoryRange(image, allocationIdx, bindInfo, mem, memOffset);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateFence(XGL_DEVICE device, const XGL_FENCE_CREATE_INFO* pCreateInfo, XGL_FENCE* pFence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateFence(VK_DEVICE device, const VK_FENCE_CREATE_INFO* pCreateInfo, VK_FENCE* pFence)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateFence is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_fence_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_fence_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateFence contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateFence(device, pCreateInfo, pFence);
+ VK_RESULT result = nextTable.CreateFence(device, pCreateInfo, pFence);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetFenceStatus(XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetFenceStatus(VK_FENCE fence)
{
- XGL_RESULT result = nextTable.GetFenceStatus(fence);
+ VK_RESULT result = nextTable.GetFenceStatus(fence);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWaitForFences(XGL_DEVICE device, uint32_t fenceCount, const XGL_FENCE* pFences, bool32_t waitAll, uint64_t timeout)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWaitForFences(VK_DEVICE device, uint32_t fenceCount, const VK_FENCE* pFences, bool32_t waitAll, uint64_t timeout)
{
- XGL_RESULT result = nextTable.WaitForFences(device, fenceCount, pFences, waitAll, timeout);
+ VK_RESULT result = nextTable.WaitForFences(device, fenceCount, pFences, waitAll, timeout);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetFences(XGL_DEVICE device, uint32_t fenceCount, XGL_FENCE* pFences)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetFences(VK_DEVICE device, uint32_t fenceCount, VK_FENCE* pFences)
{
- XGL_RESULT result = nextTable.ResetFences(device, fenceCount, pFences);
+ VK_RESULT result = nextTable.ResetFences(device, fenceCount, pFences);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateSemaphore(XGL_DEVICE device, const XGL_SEMAPHORE_CREATE_INFO* pCreateInfo, XGL_SEMAPHORE* pSemaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateSemaphore(VK_DEVICE device, const VK_SEMAPHORE_CREATE_INFO* pCreateInfo, VK_SEMAPHORE* pSemaphore)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateSemaphore is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_semaphore_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_semaphore_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateSemaphore contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateSemaphore(device, pCreateInfo, pSemaphore);
+ VK_RESULT result = nextTable.CreateSemaphore(device, pCreateInfo, pSemaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueSignalSemaphore(XGL_QUEUE queue, XGL_SEMAPHORE semaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueSignalSemaphore(VK_QUEUE queue, VK_SEMAPHORE semaphore)
{
- XGL_RESULT result = nextTable.QueueSignalSemaphore(queue, semaphore);
+ VK_RESULT result = nextTable.QueueSignalSemaphore(queue, semaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglQueueWaitSemaphore(XGL_QUEUE queue, XGL_SEMAPHORE semaphore)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkQueueWaitSemaphore(VK_QUEUE queue, VK_SEMAPHORE semaphore)
{
- XGL_RESULT result = nextTable.QueueWaitSemaphore(queue, semaphore);
+ VK_RESULT result = nextTable.QueueWaitSemaphore(queue, semaphore);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateEvent(XGL_DEVICE device, const XGL_EVENT_CREATE_INFO* pCreateInfo, XGL_EVENT* pEvent)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateEvent(VK_DEVICE device, const VK_EVENT_CREATE_INFO* pCreateInfo, VK_EVENT* pEvent)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateEvent is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_event_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_event_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateEvent contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateEvent(device, pCreateInfo, pEvent);
+ VK_RESULT result = nextTable.CreateEvent(device, pCreateInfo, pEvent);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetEventStatus(XGL_EVENT event)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetEventStatus(VK_EVENT event)
{
- XGL_RESULT result = nextTable.GetEventStatus(event);
+ VK_RESULT result = nextTable.GetEventStatus(event);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglSetEvent(XGL_EVENT event)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkSetEvent(VK_EVENT event)
{
- XGL_RESULT result = nextTable.SetEvent(event);
+ VK_RESULT result = nextTable.SetEvent(event);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetEvent(XGL_EVENT event)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetEvent(VK_EVENT event)
{
- XGL_RESULT result = nextTable.ResetEvent(event);
+ VK_RESULT result = nextTable.ResetEvent(event);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateQueryPool(XGL_DEVICE device, const XGL_QUERY_POOL_CREATE_INFO* pCreateInfo, XGL_QUERY_POOL* pQueryPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateQueryPool(VK_DEVICE device, const VK_QUERY_POOL_CREATE_INFO* pCreateInfo, VK_QUERY_POOL* pQueryPool)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateQueryPool is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_query_pool_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_query_pool_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateQueryPool contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateQueryPool(device, pCreateInfo, pQueryPool);
+ VK_RESULT result = nextTable.CreateQueryPool(device, pCreateInfo, pQueryPool);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetQueryPoolResults(XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetQueryPoolResults(VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount, size_t* pDataSize, void* pData)
{
- XGL_RESULT result = nextTable.GetQueryPoolResults(queryPool, startQuery, queryCount, pDataSize, pData);
+ VK_RESULT result = nextTable.GetQueryPoolResults(queryPool, startQuery, queryCount, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetFormatInfo(XGL_DEVICE device, XGL_FORMAT format, XGL_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetFormatInfo(VK_DEVICE device, VK_FORMAT format, VK_FORMAT_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
char str[1024];
- if (!validate_XGL_FORMAT(format)) {
+ if (!validate_VK_FORMAT(format)) {
sprintf(str, "Parameter format to function GetFormatInfo has invalid value of %i.", (int)format);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- if (!validate_XGL_FORMAT_INFO_TYPE(infoType)) {
+ if (!validate_VK_FORMAT_INFO_TYPE(infoType)) {
sprintf(str, "Parameter infoType to function GetFormatInfo has invalid value of %i.", (int)infoType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.GetFormatInfo(device, format, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetFormatInfo(device, format, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBuffer(XGL_DEVICE device, const XGL_BUFFER_CREATE_INFO* pCreateInfo, XGL_BUFFER* pBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBuffer(VK_DEVICE device, const VK_BUFFER_CREATE_INFO* pCreateInfo, VK_BUFFER* pBuffer)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateBuffer is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_buffer_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_buffer_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateBuffer contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateBuffer(device, pCreateInfo, pBuffer);
+ VK_RESULT result = nextTable.CreateBuffer(device, pCreateInfo, pBuffer);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateBufferView(XGL_DEVICE device, const XGL_BUFFER_VIEW_CREATE_INFO* pCreateInfo, XGL_BUFFER_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateBufferView(VK_DEVICE device, const VK_BUFFER_VIEW_CREATE_INFO* pCreateInfo, VK_BUFFER_VIEW* pView)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateBufferView is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_buffer_view_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_buffer_view_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateBufferView contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
+ VK_RESULT result = nextTable.CreateBufferView(device, pCreateInfo, pView);
return result;
}
-void PreCreateImage(XGL_DEVICE device, const XGL_IMAGE_CREATE_INFO* pCreateInfo)
+void PreCreateImage(VK_DEVICE device, const VK_IMAGE_CREATE_INFO* pCreateInfo)
{
if(pCreateInfo == nullptr)
{
- char const str[] = "xglCreateImage parameter, XGL_IMAGE_CREATE_INFO* pCreateInfo, is "\
+ char const str[] = "vkCreateImage parameter, VK_IMAGE_CREATE_INFO* pCreateInfo, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(pCreateInfo->sType != XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO)
+ if(pCreateInfo->sType != VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO)
{
- char const str[] = "xglCreateImage parameter, XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO "\
- "pCreateInfo->sType, is not XGL_STRUCTURE_TYPE_IMAGE_CREATE_INFO (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateImage parameter, VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO "\
+ "pCreateInfo->sType, is not VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO (precondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if (!validate_XGL_IMAGE_TYPE(pCreateInfo->imageType))
+ if (!validate_VK_IMAGE_TYPE(pCreateInfo->imageType))
{
- char const str[] = "xglCreateImage parameter, XGL_IMAGE_TYPE pCreateInfo->imageType, is "\
+ char const str[] = "vkCreateImage parameter, VK_IMAGE_TYPE pCreateInfo->imageType, is "\
"unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if (!validate_XGL_FORMAT(pCreateInfo->format))
+ if (!validate_VK_FORMAT(pCreateInfo->format))
{
- char const str[] = "xglCreateImage parameter, XGL_FORMAT pCreateInfo->format, is "\
+ char const str[] = "vkCreateImage parameter, VK_FORMAT pCreateInfo->format, is "\
"unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- XGL_FORMAT_PROPERTIES properties;
+ VK_FORMAT_PROPERTIES properties;
size_t size = sizeof(properties);
- XGL_RESULT result = nextTable.GetFormatInfo(device, pCreateInfo->format,
- XGL_INFO_TYPE_FORMAT_PROPERTIES, &size, &properties);
- if(result != XGL_SUCCESS)
+ VK_RESULT result = nextTable.GetFormatInfo(device, pCreateInfo->format,
+ VK_INFO_TYPE_FORMAT_PROPERTIES, &size, &properties);
+ if(result != VK_SUCCESS)
{
- char const str[] = "xglCreateImage parameter, XGL_FORMAT pCreateInfo->format, cannot be "\
+ char const str[] = "vkCreateImage parameter, VK_FORMAT pCreateInfo->format, cannot be "\
"validated (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if((properties.linearTilingFeatures) == 0 && (properties.optimalTilingFeatures == 0))
{
- char const str[] = "xglCreateImage parameter, XGL_FORMAT pCreateInfo->format, contains "\
+ char const str[] = "vkCreateImage parameter, VK_FORMAT pCreateInfo->format, contains "\
"unsupported format (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
// TODO: Can we check device-specific limits?
- if (!xgl_validate_xgl_extent3d(&pCreateInfo->extent))
+ if (!vk_validate_vk_extent3d(&pCreateInfo->extent))
{
- char const str[] = "xglCreateImage parameter, XGL_EXTENT3D pCreateInfo->extent, is invalid "\
+ char const str[] = "vkCreateImage parameter, VK_EXTENT3D pCreateInfo->extent, is invalid "\
"(precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if (!validate_XGL_IMAGE_TILING(pCreateInfo->tiling))
+ if (!validate_VK_IMAGE_TILING(pCreateInfo->tiling))
{
- char const str[] = "xglCreateImage parameter, XGL_IMAGE_TILING pCreateInfo->tiling, is "\
+ char const str[] = "vkCreateImage parameter, VK_IMAGE_TILING pCreateInfo->tiling, is "\
-        "unrecognized (precondition).";
+        "unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-void PostCreateImage(XGL_RESULT result, XGL_IMAGE* pImage)
+void PostCreateImage(VK_RESULT result, VK_IMAGE* pImage)
{
- if(result != XGL_SUCCESS)
+ if(result != VK_SUCCESS)
{
- // TODO: Spit out XGL_RESULT value.
- char const str[] = "xglCreateImage failed (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ // TODO: Spit out VK_RESULT value.
+ char const str[] = "vkCreateImage failed (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pImage == nullptr)
{
- char const str[] = "xglCreateImage parameter, XGL_IMAGE* pImage, is nullptr (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateImage parameter, VK_IMAGE* pImage, is nullptr (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImage(XGL_DEVICE device, const XGL_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImage(VK_DEVICE device, const VK_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage)
{
PreCreateImage(device, pCreateInfo);
- XGL_RESULT result = nextTable.CreateImage(device, pCreateInfo, pImage);
+ VK_RESULT result = nextTable.CreateImage(device, pCreateInfo, pImage);
PostCreateImage(result, pImage);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetImageSubresourceInfo(XGL_IMAGE image, const XGL_IMAGE_SUBRESOURCE* pSubresource, XGL_SUBRESOURCE_INFO_TYPE infoType, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkGetImageSubresourceInfo(VK_IMAGE image, const VK_IMAGE_SUBRESOURCE* pSubresource, VK_SUBRESOURCE_INFO_TYPE infoType, size_t* pDataSize, void* pData)
{
char str[1024];
if (!pSubresource) {
sprintf(str, "Struct ptr parameter pSubresource to function GetImageSubresourceInfo is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_image_subresource(pSubresource)) {
+ else if (!vk_validate_vk_image_subresource(pSubresource)) {
sprintf(str, "Parameter pSubresource to function GetImageSubresourceInfo contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- if (!validate_XGL_SUBRESOURCE_INFO_TYPE(infoType)) {
+ if (!validate_VK_SUBRESOURCE_INFO_TYPE(infoType)) {
sprintf(str, "Parameter infoType to function GetImageSubresourceInfo has invalid value of %i.", (int)infoType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.GetImageSubresourceInfo(image, pSubresource, infoType, pDataSize, pData);
+ VK_RESULT result = nextTable.GetImageSubresourceInfo(image, pSubresource, infoType, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateImageView(XGL_DEVICE device, const XGL_IMAGE_VIEW_CREATE_INFO* pCreateInfo, XGL_IMAGE_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateImageView(VK_DEVICE device, const VK_IMAGE_VIEW_CREATE_INFO* pCreateInfo, VK_IMAGE_VIEW* pView)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateImageView is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_image_view_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_image_view_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateImageView contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
+ VK_RESULT result = nextTable.CreateImageView(device, pCreateInfo, pView);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateColorAttachmentView(XGL_DEVICE device, const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo, XGL_COLOR_ATTACHMENT_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateColorAttachmentView(VK_DEVICE device, const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO* pCreateInfo, VK_COLOR_ATTACHMENT_VIEW* pView)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateColorAttachmentView is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_color_attachment_view_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_color_attachment_view_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateColorAttachmentView contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateColorAttachmentView(device, pCreateInfo, pView);
+ VK_RESULT result = nextTable.CreateColorAttachmentView(device, pCreateInfo, pView);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDepthStencilView(XGL_DEVICE device, const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, XGL_DEPTH_STENCIL_VIEW* pView)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDepthStencilView(VK_DEVICE device, const VK_DEPTH_STENCIL_VIEW_CREATE_INFO* pCreateInfo, VK_DEPTH_STENCIL_VIEW* pView)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDepthStencilView is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_depth_stencil_view_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_depth_stencil_view_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDepthStencilView contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDepthStencilView(device, pCreateInfo, pView);
+ VK_RESULT result = nextTable.CreateDepthStencilView(device, pCreateInfo, pView);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateShader(XGL_DEVICE device, const XGL_SHADER_CREATE_INFO* pCreateInfo, XGL_SHADER* pShader)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateShader(VK_DEVICE device, const VK_SHADER_CREATE_INFO* pCreateInfo, VK_SHADER* pShader)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateShader is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_shader_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_shader_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateShader contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateShader(device, pCreateInfo, pShader);
+ VK_RESULT result = nextTable.CreateShader(device, pCreateInfo, pShader);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipeline(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipeline(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateGraphicsPipeline is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_graphics_pipeline_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_graphics_pipeline_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateGraphicsPipeline contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
+ VK_RESULT result = nextTable.CreateGraphicsPipeline(device, pCreateInfo, pPipeline);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateGraphicsPipelineDerivative(XGL_DEVICE device, const XGL_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE basePipeline, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateGraphicsPipelineDerivative(VK_DEVICE device, const VK_GRAPHICS_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE basePipeline, VK_PIPELINE* pPipeline)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateGraphicsPipelineDerivative is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_graphics_pipeline_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_graphics_pipeline_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateGraphicsPipelineDerivative contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateGraphicsPipelineDerivative(device, pCreateInfo, basePipeline, pPipeline);
+ VK_RESULT result = nextTable.CreateGraphicsPipelineDerivative(device, pCreateInfo, basePipeline, pPipeline);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateComputePipeline(XGL_DEVICE device, const XGL_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateComputePipeline(VK_DEVICE device, const VK_COMPUTE_PIPELINE_CREATE_INFO* pCreateInfo, VK_PIPELINE* pPipeline)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateComputePipeline is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_compute_pipeline_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_compute_pipeline_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateComputePipeline contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateComputePipeline(device, pCreateInfo, pPipeline);
+ VK_RESULT result = nextTable.CreateComputePipeline(device, pCreateInfo, pPipeline);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglStorePipeline(XGL_PIPELINE pipeline, size_t* pDataSize, void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkStorePipeline(VK_PIPELINE pipeline, size_t* pDataSize, void* pData)
{
- XGL_RESULT result = nextTable.StorePipeline(pipeline, pDataSize, pData);
+ VK_RESULT result = nextTable.StorePipeline(pipeline, pDataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglLoadPipeline(XGL_DEVICE device, size_t dataSize, const void* pData, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkLoadPipeline(VK_DEVICE device, size_t dataSize, const void* pData, VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.LoadPipeline(device, dataSize, pData, pPipeline);
+ VK_RESULT result = nextTable.LoadPipeline(device, dataSize, pData, pPipeline);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglLoadPipelineDerivative(XGL_DEVICE device, size_t dataSize, const void* pData, XGL_PIPELINE basePipeline, XGL_PIPELINE* pPipeline)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkLoadPipelineDerivative(VK_DEVICE device, size_t dataSize, const void* pData, VK_PIPELINE basePipeline, VK_PIPELINE* pPipeline)
{
- XGL_RESULT result = nextTable.LoadPipelineDerivative(device, dataSize, pData, basePipeline, pPipeline);
+ VK_RESULT result = nextTable.LoadPipelineDerivative(device, dataSize, pData, basePipeline, pPipeline);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateSampler(XGL_DEVICE device, const XGL_SAMPLER_CREATE_INFO* pCreateInfo, XGL_SAMPLER* pSampler)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateSampler(VK_DEVICE device, const VK_SAMPLER_CREATE_INFO* pCreateInfo, VK_SAMPLER* pSampler)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateSampler is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_sampler_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_sampler_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateSampler contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
+ VK_RESULT result = nextTable.CreateSampler(device, pCreateInfo, pSampler);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayout(XGL_DEVICE device, const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_SET_LAYOUT* pSetLayout)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayout(VK_DEVICE device, const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_SET_LAYOUT* pSetLayout)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDescriptorSetLayout is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_descriptor_set_layout_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_descriptor_set_layout_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDescriptorSetLayout contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDescriptorSetLayout(device, pCreateInfo, pSetLayout);
+ VK_RESULT result = nextTable.CreateDescriptorSetLayout(device, pCreateInfo, pSetLayout);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorSetLayoutChain(XGL_DEVICE device, uint32_t setLayoutArrayCount, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorSetLayoutChain(VK_DEVICE device, uint32_t setLayoutArrayCount, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayoutArray, VK_DESCRIPTOR_SET_LAYOUT_CHAIN* pLayoutChain)
{
- XGL_RESULT result = nextTable.CreateDescriptorSetLayoutChain(device, setLayoutArrayCount, pSetLayoutArray, pLayoutChain);
+ VK_RESULT result = nextTable.CreateDescriptorSetLayoutChain(device, setLayoutArrayCount, pSetLayoutArray, pLayoutChain);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginDescriptorPoolUpdate(XGL_DEVICE device, XGL_DESCRIPTOR_UPDATE_MODE updateMode)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginDescriptorPoolUpdate(VK_DEVICE device, VK_DESCRIPTOR_UPDATE_MODE updateMode)
{
char str[1024];
- if (!validate_XGL_DESCRIPTOR_UPDATE_MODE(updateMode)) {
+ if (!validate_VK_DESCRIPTOR_UPDATE_MODE(updateMode)) {
sprintf(str, "Parameter updateMode to function BeginDescriptorPoolUpdate has invalid value of %i.", (int)updateMode);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.BeginDescriptorPoolUpdate(device, updateMode);
+ VK_RESULT result = nextTable.BeginDescriptorPoolUpdate(device, updateMode);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndDescriptorPoolUpdate(XGL_DEVICE device, XGL_CMD_BUFFER cmd)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndDescriptorPoolUpdate(VK_DEVICE device, VK_CMD_BUFFER cmd)
{
- XGL_RESULT result = nextTable.EndDescriptorPoolUpdate(device, cmd);
+ VK_RESULT result = nextTable.EndDescriptorPoolUpdate(device, cmd);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDescriptorPool(XGL_DEVICE device, XGL_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const XGL_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, XGL_DESCRIPTOR_POOL* pDescriptorPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDescriptorPool(VK_DEVICE device, VK_DESCRIPTOR_POOL_USAGE poolUsage, uint32_t maxSets, const VK_DESCRIPTOR_POOL_CREATE_INFO* pCreateInfo, VK_DESCRIPTOR_POOL* pDescriptorPool)
{
char str[1024];
- if (!validate_XGL_DESCRIPTOR_POOL_USAGE(poolUsage)) {
+ if (!validate_VK_DESCRIPTOR_POOL_USAGE(poolUsage)) {
sprintf(str, "Parameter poolUsage to function CreateDescriptorPool has invalid value of %i.", (int)poolUsage);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDescriptorPool is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_descriptor_pool_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_descriptor_pool_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDescriptorPool contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDescriptorPool(device, poolUsage, maxSets, pCreateInfo, pDescriptorPool);
+ VK_RESULT result = nextTable.CreateDescriptorPool(device, poolUsage, maxSets, pCreateInfo, pDescriptorPool);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetDescriptorPool(XGL_DESCRIPTOR_POOL descriptorPool)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetDescriptorPool(VK_DESCRIPTOR_POOL descriptorPool)
{
- XGL_RESULT result = nextTable.ResetDescriptorPool(descriptorPool);
+ VK_RESULT result = nextTable.ResetDescriptorPool(descriptorPool);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglAllocDescriptorSets(XGL_DESCRIPTOR_POOL descriptorPool, XGL_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const XGL_DESCRIPTOR_SET_LAYOUT* pSetLayouts, XGL_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkAllocDescriptorSets(VK_DESCRIPTOR_POOL descriptorPool, VK_DESCRIPTOR_SET_USAGE setUsage, uint32_t count, const VK_DESCRIPTOR_SET_LAYOUT* pSetLayouts, VK_DESCRIPTOR_SET* pDescriptorSets, uint32_t* pCount)
{
char str[1024];
- if (!validate_XGL_DESCRIPTOR_SET_USAGE(setUsage)) {
+ if (!validate_VK_DESCRIPTOR_SET_USAGE(setUsage)) {
sprintf(str, "Parameter setUsage to function AllocDescriptorSets has invalid value of %i.", (int)setUsage);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.AllocDescriptorSets(descriptorPool, setUsage, count, pSetLayouts, pDescriptorSets, pCount);
+ VK_RESULT result = nextTable.AllocDescriptorSets(descriptorPool, setUsage, count, pSetLayouts, pDescriptorSets, pCount);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglClearDescriptorSets(XGL_DESCRIPTOR_POOL descriptorPool, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets)
+VK_LAYER_EXPORT void VKAPI vkClearDescriptorSets(VK_DESCRIPTOR_POOL descriptorPool, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets)
{
nextTable.ClearDescriptorSets(descriptorPool, count, pDescriptorSets);
}
-XGL_LAYER_EXPORT void XGLAPI xglUpdateDescriptors(XGL_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray)
+VK_LAYER_EXPORT void VKAPI vkUpdateDescriptors(VK_DESCRIPTOR_SET descriptorSet, uint32_t updateCount, const void** ppUpdateArray)
{
nextTable.UpdateDescriptors(descriptorSet, updateCount, ppUpdateArray);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicViewportState(XGL_DEVICE device, const XGL_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_VP_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicViewportState(VK_DEVICE device, const VK_DYNAMIC_VP_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_VP_STATE_OBJECT* pState)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDynamicViewportState is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_dynamic_vp_state_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_dynamic_vp_state_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDynamicViewportState contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
+ VK_RESULT result = nextTable.CreateDynamicViewportState(device, pCreateInfo, pState);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicRasterState(XGL_DEVICE device, const XGL_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_RS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicRasterState(VK_DEVICE device, const VK_DYNAMIC_RS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_RS_STATE_OBJECT* pState)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDynamicRasterState is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_dynamic_rs_state_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_dynamic_rs_state_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDynamicRasterState contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
+ VK_RESULT result = nextTable.CreateDynamicRasterState(device, pCreateInfo, pState);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicColorBlendState(XGL_DEVICE device, const XGL_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_CB_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicColorBlendState(VK_DEVICE device, const VK_DYNAMIC_CB_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_CB_STATE_OBJECT* pState)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDynamicColorBlendState is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_dynamic_cb_state_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_dynamic_cb_state_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDynamicColorBlendState contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
+ VK_RESULT result = nextTable.CreateDynamicColorBlendState(device, pCreateInfo, pState);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateDynamicDepthStencilState(XGL_DEVICE device, const XGL_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, XGL_DYNAMIC_DS_STATE_OBJECT* pState)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateDynamicDepthStencilState(VK_DEVICE device, const VK_DYNAMIC_DS_STATE_CREATE_INFO* pCreateInfo, VK_DYNAMIC_DS_STATE_OBJECT* pState)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateDynamicDepthStencilState is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_dynamic_ds_state_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_dynamic_ds_state_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateDynamicDepthStencilState contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
+ VK_RESULT result = nextTable.CreateDynamicDepthStencilState(device, pCreateInfo, pState);
return result;
}
-void PreCreateCommandBuffer(XGL_DEVICE device, const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo)
+void PreCreateCommandBuffer(VK_DEVICE device, const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo)
{
if(device == nullptr)
{
- char const str[] = "xglCreateCommandBuffer parameter, XGL_DEVICE device, is "\
+ char const str[] = "vkCreateCommandBuffer parameter, VK_DEVICE device, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pCreateInfo == nullptr)
{
- char const str[] = "xglCreateCommandBuffer parameter, XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo, is "\
+ char const str[] = "vkCreateCommandBuffer parameter, VK_CMD_BUFFER_CREATE_INFO* pCreateInfo, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(pCreateInfo->sType != XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO)
+ if(pCreateInfo->sType != VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO)
{
- char const str[] = "xglCreateCommandBuffer parameter, XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO "\
- "pCreateInfo->sType, is not XGL_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateCommandBuffer parameter, VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO "\
+ "pCreateInfo->sType, is not VK_STRUCTURE_TYPE_CMD_BUFFER_CREATE_INFO (precondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-void PostCreateCommandBuffer(XGL_RESULT result, XGL_CMD_BUFFER* pCmdBuffer)
+void PostCreateCommandBuffer(VK_RESULT result, VK_CMD_BUFFER* pCmdBuffer)
{
- if(result != XGL_SUCCESS)
+ if(result != VK_SUCCESS)
{
- // TODO: Spit out XGL_RESULT value.
- char const str[] = "xglCreateCommandBuffer failed (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ // TODO: Spit out VK_RESULT value.
+ char const str[] = "vkCreateCommandBuffer failed (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pCmdBuffer == nullptr)
{
- char const str[] = "xglCreateCommandBuffer parameter, XGL_CMD_BUFFER* pCmdBuffer, is nullptr (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateCommandBuffer parameter, VK_CMD_BUFFER* pCmdBuffer, is nullptr (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateCommandBuffer(XGL_DEVICE device,
- const XGL_CMD_BUFFER_CREATE_INFO* pCreateInfo, XGL_CMD_BUFFER* pCmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateCommandBuffer(VK_DEVICE device,
+ const VK_CMD_BUFFER_CREATE_INFO* pCreateInfo, VK_CMD_BUFFER* pCmdBuffer)
{
PreCreateCommandBuffer(device, pCreateInfo);
- XGL_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
+ VK_RESULT result = nextTable.CreateCommandBuffer(device, pCreateInfo, pCmdBuffer);
PostCreateCommandBuffer(result, pCmdBuffer);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglBeginCommandBuffer(XGL_CMD_BUFFER cmdBuffer, const XGL_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkBeginCommandBuffer(VK_CMD_BUFFER cmdBuffer, const VK_CMD_BUFFER_BEGIN_INFO* pBeginInfo)
{
char str[1024];
if (!pBeginInfo) {
sprintf(str, "Struct ptr parameter pBeginInfo to function BeginCommandBuffer is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_cmd_buffer_begin_info(pBeginInfo)) {
+ else if (!vk_validate_vk_cmd_buffer_begin_info(pBeginInfo)) {
sprintf(str, "Parameter pBeginInfo to function BeginCommandBuffer contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
+ VK_RESULT result = nextTable.BeginCommandBuffer(cmdBuffer, pBeginInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglEndCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkEndCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
- XGL_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
+ VK_RESULT result = nextTable.EndCommandBuffer(cmdBuffer);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglResetCommandBuffer(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkResetCommandBuffer(VK_CMD_BUFFER cmdBuffer)
{
- XGL_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
+ VK_RESULT result = nextTable.ResetCommandBuffer(cmdBuffer);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindPipeline(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_PIPELINE pipeline)
+VK_LAYER_EXPORT void VKAPI vkCmdBindPipeline(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_PIPELINE pipeline)
{
char str[1024];
- if (!validate_XGL_PIPELINE_BIND_POINT(pipelineBindPoint)) {
+ if (!validate_VK_PIPELINE_BIND_POINT(pipelineBindPoint)) {
sprintf(str, "Parameter pipelineBindPoint to function CmdBindPipeline has invalid value of %i.", (int)pipelineBindPoint);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdBindPipeline(cmdBuffer, pipelineBindPoint, pipeline);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDynamicStateObject(XGL_CMD_BUFFER cmdBuffer, XGL_STATE_BIND_POINT stateBindPoint, XGL_DYNAMIC_STATE_OBJECT state)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDynamicStateObject(VK_CMD_BUFFER cmdBuffer, VK_STATE_BIND_POINT stateBindPoint, VK_DYNAMIC_STATE_OBJECT state)
{
char str[1024];
- if (!validate_XGL_STATE_BIND_POINT(stateBindPoint)) {
+ if (!validate_VK_STATE_BIND_POINT(stateBindPoint)) {
sprintf(str, "Parameter stateBindPoint to function CmdBindDynamicStateObject has invalid value of %i.", (int)stateBindPoint);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdBindDynamicStateObject(cmdBuffer, stateBindPoint, state);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindDescriptorSets(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, XGL_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const XGL_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData)
+VK_LAYER_EXPORT void VKAPI vkCmdBindDescriptorSets(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, VK_DESCRIPTOR_SET_LAYOUT_CHAIN layoutChain, uint32_t layoutChainSlot, uint32_t count, const VK_DESCRIPTOR_SET* pDescriptorSets, const uint32_t* pUserData)
{
char str[1024];
- if (!validate_XGL_PIPELINE_BIND_POINT(pipelineBindPoint)) {
+ if (!validate_VK_PIPELINE_BIND_POINT(pipelineBindPoint)) {
sprintf(str, "Parameter pipelineBindPoint to function CmdBindDescriptorSets has invalid value of %i.", (int)pipelineBindPoint);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdBindDescriptorSets(cmdBuffer, pipelineBindPoint, layoutChain, layoutChainSlot, count, pDescriptorSets, pUserData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindVertexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t binding)
+VK_LAYER_EXPORT void VKAPI vkCmdBindVertexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t binding)
{
nextTable.CmdBindVertexBuffer(cmdBuffer, buffer, offset, binding);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBindIndexBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, XGL_INDEX_TYPE indexType)
+VK_LAYER_EXPORT void VKAPI vkCmdBindIndexBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, VK_INDEX_TYPE indexType)
{
char str[1024];
- if (!validate_XGL_INDEX_TYPE(indexType)) {
+ if (!validate_VK_INDEX_TYPE(indexType)) {
sprintf(str, "Parameter indexType to function CmdBindIndexBuffer has invalid value of %i.", (int)indexType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdBindIndexBuffer(cmdBuffer, buffer, offset, indexType);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDraw(XGL_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount)
+VK_LAYER_EXPORT void VKAPI vkCmdDraw(VK_CMD_BUFFER cmdBuffer, uint32_t firstVertex, uint32_t vertexCount, uint32_t firstInstance, uint32_t instanceCount)
{
nextTable.CmdDraw(cmdBuffer, firstVertex, vertexCount, firstInstance, instanceCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexed(XGL_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexed(VK_CMD_BUFFER cmdBuffer, uint32_t firstIndex, uint32_t indexCount, int32_t vertexOffset, uint32_t firstInstance, uint32_t instanceCount)
{
nextTable.CmdDrawIndexed(cmdBuffer, firstIndex, indexCount, vertexOffset, firstInstance, instanceCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
nextTable.CmdDrawIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDrawIndexedIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset, uint32_t count, uint32_t stride)
+VK_LAYER_EXPORT void VKAPI vkCmdDrawIndexedIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset, uint32_t count, uint32_t stride)
{
nextTable.CmdDrawIndexedIndirect(cmdBuffer, buffer, offset, count, stride);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatch(XGL_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatch(VK_CMD_BUFFER cmdBuffer, uint32_t x, uint32_t y, uint32_t z)
{
nextTable.CmdDispatch(cmdBuffer, x, y, z);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDispatchIndirect(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER buffer, XGL_GPU_SIZE offset)
+VK_LAYER_EXPORT void VKAPI vkCmdDispatchIndirect(VK_CMD_BUFFER cmdBuffer, VK_BUFFER buffer, VK_GPU_SIZE offset)
{
nextTable.CmdDispatchIndirect(cmdBuffer, buffer, offset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_COPY* pRegions)
{
char str[1024];
uint32_t i;
for (i = 0; i < regionCount; i++) {
- if (!xgl_validate_xgl_buffer_copy(&pRegions[i])) {
+ if (!vk_validate_vk_buffer_copy(&pRegions[i])) {
sprintf(str, "Parameter pRegions[%i] to function CmdCopyBuffer contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdCopyBuffer(cmdBuffer, srcBuffer, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_IMAGE_COPY* pRegions)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(srcImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(srcImageLayout)) {
sprintf(str, "Parameter srcImageLayout to function CmdCopyImage has invalid value of %i.", (int)srcImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- if (!validate_XGL_IMAGE_LAYOUT(destImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(destImageLayout)) {
sprintf(str, "Parameter destImageLayout to function CmdCopyImage has invalid value of %i.", (int)destImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < regionCount; i++) {
- if (!xgl_validate_xgl_image_copy(&pRegions[i])) {
+ if (!vk_validate_vk_image_copy(&pRegions[i])) {
sprintf(str, "Parameter pRegions[%i] to function CmdCopyImage contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdCopyImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBlitImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_IMAGE_BLIT* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdBlitImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_IMAGE_BLIT* pRegions)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(srcImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(srcImageLayout)) {
sprintf(str, "Parameter srcImageLayout to function CmdBlitImage has invalid value of %i.", (int)srcImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- if (!validate_XGL_IMAGE_LAYOUT(destImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(destImageLayout)) {
sprintf(str, "Parameter destImageLayout to function CmdBlitImage has invalid value of %i.", (int)destImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < regionCount; i++) {
- if (!xgl_validate_xgl_image_blit(&pRegions[i])) {
+ if (!vk_validate_vk_image_blit(&pRegions[i])) {
sprintf(str, "Parameter pRegions[%i] to function CmdBlitImage contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdBlitImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyBufferToImage(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER srcBuffer, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyBufferToImage(VK_CMD_BUFFER cmdBuffer, VK_BUFFER srcBuffer, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(destImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(destImageLayout)) {
sprintf(str, "Parameter destImageLayout to function CmdCopyBufferToImage has invalid value of %i.", (int)destImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < regionCount; i++) {
- if (!xgl_validate_xgl_buffer_image_copy(&pRegions[i])) {
+ if (!vk_validate_vk_buffer_image_copy(&pRegions[i])) {
sprintf(str, "Parameter pRegions[%i] to function CmdCopyBufferToImage contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdCopyBufferToImage(cmdBuffer, srcBuffer, destImage, destImageLayout, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCopyImageToBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_BUFFER destBuffer, uint32_t regionCount, const XGL_BUFFER_IMAGE_COPY* pRegions)
+VK_LAYER_EXPORT void VKAPI vkCmdCopyImageToBuffer(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_BUFFER destBuffer, uint32_t regionCount, const VK_BUFFER_IMAGE_COPY* pRegions)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(srcImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(srcImageLayout)) {
sprintf(str, "Parameter srcImageLayout to function CmdCopyImageToBuffer has invalid value of %i.", (int)srcImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < regionCount; i++) {
- if (!xgl_validate_xgl_buffer_image_copy(&pRegions[i])) {
+ if (!vk_validate_vk_buffer_image_copy(&pRegions[i])) {
sprintf(str, "Parameter pRegions[%i] to function CmdCopyImageToBuffer contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdCopyImageToBuffer(cmdBuffer, srcImage, srcImageLayout, destBuffer, regionCount, pRegions);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdCloneImageData(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout)
+VK_LAYER_EXPORT void VKAPI vkCmdCloneImageData(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(srcImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(srcImageLayout)) {
sprintf(str, "Parameter srcImageLayout to function CmdCloneImageData has invalid value of %i.", (int)srcImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- if (!validate_XGL_IMAGE_LAYOUT(destImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(destImageLayout)) {
sprintf(str, "Parameter destImageLayout to function CmdCloneImageData has invalid value of %i.", (int)destImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdCloneImageData(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdUpdateBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE dataSize, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdUpdateBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE dataSize, const uint32_t* pData)
{
nextTable.CmdUpdateBuffer(cmdBuffer, destBuffer, destOffset, dataSize, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdFillBuffer(XGL_CMD_BUFFER cmdBuffer, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset, XGL_GPU_SIZE fillSize, uint32_t data)
+VK_LAYER_EXPORT void VKAPI vkCmdFillBuffer(VK_CMD_BUFFER cmdBuffer, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset, VK_GPU_SIZE fillSize, uint32_t data)
{
nextTable.CmdFillBuffer(cmdBuffer, destBuffer, destOffset, fillSize, data);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearColorImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout, XGL_CLEAR_COLOR color, uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearColorImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout, VK_CLEAR_COLOR color, uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(imageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(imageLayout)) {
sprintf(str, "Parameter imageLayout to function CmdClearColorImage has invalid value of %i.", (int)imageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < rangeCount; i++) {
- if (!xgl_validate_xgl_image_subresource_range(&pRanges[i])) {
+ if (!vk_validate_vk_image_subresource_range(&pRanges[i])) {
sprintf(str, "Parameter pRanges[%i] to function CmdClearColorImage contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdClearColorImage(cmdBuffer, image, imageLayout, color, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdClearDepthStencil(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE image, XGL_IMAGE_LAYOUT imageLayout, float depth, uint32_t stencil, uint32_t rangeCount, const XGL_IMAGE_SUBRESOURCE_RANGE* pRanges)
+VK_LAYER_EXPORT void VKAPI vkCmdClearDepthStencil(VK_CMD_BUFFER cmdBuffer, VK_IMAGE image, VK_IMAGE_LAYOUT imageLayout, float depth, uint32_t stencil, uint32_t rangeCount, const VK_IMAGE_SUBRESOURCE_RANGE* pRanges)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(imageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(imageLayout)) {
sprintf(str, "Parameter imageLayout to function CmdClearDepthStencil has invalid value of %i.", (int)imageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < rangeCount; i++) {
- if (!xgl_validate_xgl_image_subresource_range(&pRanges[i])) {
+ if (!vk_validate_vk_image_subresource_range(&pRanges[i])) {
sprintf(str, "Parameter pRanges[%i] to function CmdClearDepthStencil contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdClearDepthStencil(cmdBuffer, image, imageLayout, depth, stencil, rangeCount, pRanges);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResolveImage(XGL_CMD_BUFFER cmdBuffer, XGL_IMAGE srcImage, XGL_IMAGE_LAYOUT srcImageLayout, XGL_IMAGE destImage, XGL_IMAGE_LAYOUT destImageLayout, uint32_t rectCount, const XGL_IMAGE_RESOLVE* pRects)
+VK_LAYER_EXPORT void VKAPI vkCmdResolveImage(VK_CMD_BUFFER cmdBuffer, VK_IMAGE srcImage, VK_IMAGE_LAYOUT srcImageLayout, VK_IMAGE destImage, VK_IMAGE_LAYOUT destImageLayout, uint32_t rectCount, const VK_IMAGE_RESOLVE* pRects)
{
char str[1024];
- if (!validate_XGL_IMAGE_LAYOUT(srcImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(srcImageLayout)) {
sprintf(str, "Parameter srcImageLayout to function CmdResolveImage has invalid value of %i.", (int)srcImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- if (!validate_XGL_IMAGE_LAYOUT(destImageLayout)) {
+ if (!validate_VK_IMAGE_LAYOUT(destImageLayout)) {
sprintf(str, "Parameter destImageLayout to function CmdResolveImage has invalid value of %i.", (int)destImageLayout);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
uint32_t i;
for (i = 0; i < rectCount; i++) {
- if (!xgl_validate_xgl_image_resolve(&pRects[i])) {
+ if (!vk_validate_vk_image_resolve(&pRects[i])) {
sprintf(str, "Parameter pRects[%i] to function CmdResolveImage contains an invalid value.", i);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
}
nextTable.CmdResolveImage(cmdBuffer, srcImage, srcImageLayout, destImage, destImageLayout, rectCount, pRects);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdSetEvent(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent)
+VK_LAYER_EXPORT void VKAPI vkCmdSetEvent(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent)
{
char str[1024];
- if (!validate_XGL_PIPE_EVENT(pipeEvent)) {
+ if (!validate_VK_PIPE_EVENT(pipeEvent)) {
sprintf(str, "Parameter pipeEvent to function CmdSetEvent has invalid value of %i.", (int)pipeEvent);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdSetEvent(cmdBuffer, event, pipeEvent);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetEvent(XGL_CMD_BUFFER cmdBuffer, XGL_EVENT event, XGL_PIPE_EVENT pipeEvent)
+VK_LAYER_EXPORT void VKAPI vkCmdResetEvent(VK_CMD_BUFFER cmdBuffer, VK_EVENT event, VK_PIPE_EVENT pipeEvent)
{
char str[1024];
- if (!validate_XGL_PIPE_EVENT(pipeEvent)) {
+ if (!validate_VK_PIPE_EVENT(pipeEvent)) {
sprintf(str, "Parameter pipeEvent to function CmdResetEvent has invalid value of %i.", (int)pipeEvent);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdResetEvent(cmdBuffer, event, pipeEvent);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdWaitEvents(XGL_CMD_BUFFER cmdBuffer, const XGL_EVENT_WAIT_INFO* pWaitInfo)
+VK_LAYER_EXPORT void VKAPI vkCmdWaitEvents(VK_CMD_BUFFER cmdBuffer, const VK_EVENT_WAIT_INFO* pWaitInfo)
{
char str[1024];
if (!pWaitInfo) {
sprintf(str, "Struct ptr parameter pWaitInfo to function CmdWaitEvents is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_event_wait_info(pWaitInfo)) {
+ else if (!vk_validate_vk_event_wait_info(pWaitInfo)) {
sprintf(str, "Parameter pWaitInfo to function CmdWaitEvents contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdWaitEvents(cmdBuffer, pWaitInfo);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdPipelineBarrier(XGL_CMD_BUFFER cmdBuffer, const XGL_PIPELINE_BARRIER* pBarrier)
+VK_LAYER_EXPORT void VKAPI vkCmdPipelineBarrier(VK_CMD_BUFFER cmdBuffer, const VK_PIPELINE_BARRIER* pBarrier)
{
char str[1024];
if (!pBarrier) {
sprintf(str, "Struct ptr parameter pBarrier to function CmdPipelineBarrier is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_pipeline_barrier(pBarrier)) {
+ else if (!vk_validate_vk_pipeline_barrier(pBarrier)) {
sprintf(str, "Parameter pBarrier to function CmdPipelineBarrier contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdPipelineBarrier(cmdBuffer, pBarrier);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot, XGL_FLAGS flags)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot, VK_FLAGS flags)
{
nextTable.CmdBeginQuery(cmdBuffer, queryPool, slot, flags);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndQuery(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t slot)
+VK_LAYER_EXPORT void VKAPI vkCmdEndQuery(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t slot)
{
nextTable.CmdEndQuery(cmdBuffer, queryPool, slot);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdResetQueryPool(XGL_CMD_BUFFER cmdBuffer, XGL_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
+VK_LAYER_EXPORT void VKAPI vkCmdResetQueryPool(VK_CMD_BUFFER cmdBuffer, VK_QUERY_POOL queryPool, uint32_t startQuery, uint32_t queryCount)
{
nextTable.CmdResetQueryPool(cmdBuffer, queryPool, startQuery, queryCount);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdWriteTimestamp(XGL_CMD_BUFFER cmdBuffer, XGL_TIMESTAMP_TYPE timestampType, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdWriteTimestamp(VK_CMD_BUFFER cmdBuffer, VK_TIMESTAMP_TYPE timestampType, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset)
{
char str[1024];
- if (!validate_XGL_TIMESTAMP_TYPE(timestampType)) {
+ if (!validate_VK_TIMESTAMP_TYPE(timestampType)) {
sprintf(str, "Parameter timestampType to function CmdWriteTimestamp has invalid value of %i.", (int)timestampType);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdWriteTimestamp(cmdBuffer, timestampType, destBuffer, destOffset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdInitAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData)
+VK_LAYER_EXPORT void VKAPI vkCmdInitAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, const uint32_t* pData)
{
char str[1024];
- if (!validate_XGL_PIPELINE_BIND_POINT(pipelineBindPoint)) {
+ if (!validate_VK_PIPELINE_BIND_POINT(pipelineBindPoint)) {
sprintf(str, "Parameter pipelineBindPoint to function CmdInitAtomicCounters has invalid value of %i.", (int)pipelineBindPoint);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdInitAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, pData);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdLoadAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER srcBuffer, XGL_GPU_SIZE srcOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdLoadAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER srcBuffer, VK_GPU_SIZE srcOffset)
{
char str[1024];
- if (!validate_XGL_PIPELINE_BIND_POINT(pipelineBindPoint)) {
+ if (!validate_VK_PIPELINE_BIND_POINT(pipelineBindPoint)) {
sprintf(str, "Parameter pipelineBindPoint to function CmdLoadAtomicCounters has invalid value of %i.", (int)pipelineBindPoint);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdLoadAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, srcBuffer, srcOffset);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdSaveAtomicCounters(XGL_CMD_BUFFER cmdBuffer, XGL_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, XGL_BUFFER destBuffer, XGL_GPU_SIZE destOffset)
+VK_LAYER_EXPORT void VKAPI vkCmdSaveAtomicCounters(VK_CMD_BUFFER cmdBuffer, VK_PIPELINE_BIND_POINT pipelineBindPoint, uint32_t startCounter, uint32_t counterCount, VK_BUFFER destBuffer, VK_GPU_SIZE destOffset)
{
char str[1024];
- if (!validate_XGL_PIPELINE_BIND_POINT(pipelineBindPoint)) {
+ if (!validate_VK_PIPELINE_BIND_POINT(pipelineBindPoint)) {
sprintf(str, "Parameter pipelineBindPoint to function CmdSaveAtomicCounters has invalid value of %i.", (int)pipelineBindPoint);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdSaveAtomicCounters(cmdBuffer, pipelineBindPoint, startCounter, counterCount, destBuffer, destOffset);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateFramebuffer(XGL_DEVICE device, const XGL_FRAMEBUFFER_CREATE_INFO* pCreateInfo, XGL_FRAMEBUFFER* pFramebuffer)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateFramebuffer(VK_DEVICE device, const VK_FRAMEBUFFER_CREATE_INFO* pCreateInfo, VK_FRAMEBUFFER* pFramebuffer)
{
char str[1024];
if (!pCreateInfo) {
sprintf(str, "Struct ptr parameter pCreateInfo to function CreateFramebuffer is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_framebuffer_create_info(pCreateInfo)) {
+ else if (!vk_validate_vk_framebuffer_create_info(pCreateInfo)) {
sprintf(str, "Parameter pCreateInfo to function CreateFramebuffer contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.CreateFramebuffer(device, pCreateInfo, pFramebuffer);
+ VK_RESULT result = nextTable.CreateFramebuffer(device, pCreateInfo, pFramebuffer);
return result;
}
-void PreCreateRenderPass(XGL_DEVICE device, const XGL_RENDER_PASS_CREATE_INFO* pCreateInfo)
+void PreCreateRenderPass(VK_DEVICE device, const VK_RENDER_PASS_CREATE_INFO* pCreateInfo)
{
if(pCreateInfo == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_RENDER_PASS_CREATE_INFO* pCreateInfo, is "\
+ char const str[] = "vkCreateRenderPass parameter, VK_RENDER_PASS_CREATE_INFO* pCreateInfo, is "\
"nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(pCreateInfo->sType != XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO)
+ if(pCreateInfo->sType != VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO "\
- "pCreateInfo->sType, is not XGL_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO (precondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateRenderPass parameter, VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO "\
+ "pCreateInfo->sType, is not VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO (precondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!xgl_validate_xgl_rect(&pCreateInfo->renderArea))
+ if(!vk_validate_vk_rect(&pCreateInfo->renderArea))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_RECT pCreateInfo->renderArea, is invalid "\
+ char const str[] = "vkCreateRenderPass parameter, VK_RECT pCreateInfo->renderArea, is invalid "\
"(precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!xgl_validate_xgl_extent2d(&pCreateInfo->extent))
+ if(!vk_validate_vk_extent2d(&pCreateInfo->extent))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_EXTENT2D pCreateInfo->extent, is invalid "\
+ char const str[] = "vkCreateRenderPass parameter, VK_EXTENT2D pCreateInfo->extent, is invalid "\
"(precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pCreateInfo->pColorFormats == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_FORMAT* pCreateInfo->pColorFormats, "\
+ char const str[] = "vkCreateRenderPass parameter, VK_FORMAT* pCreateInfo->pColorFormats, "\
"is nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
for(uint32_t i = 0; i < pCreateInfo->colorAttachmentCount; ++i)
{
- if(!validate_XGL_FORMAT(pCreateInfo->pColorFormats[i]))
+ if(!validate_VK_FORMAT(pCreateInfo->pColorFormats[i]))
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_FORMAT pCreateInfo->pColorFormats[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_FORMAT pCreateInfo->pColorFormats[" << i <<
"], is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
- XGL_FORMAT_PROPERTIES properties;
+ VK_FORMAT_PROPERTIES properties;
size_t size = sizeof(properties);
- XGL_RESULT result = nextTable.GetFormatInfo(device, pCreateInfo->pColorFormats[i],
- XGL_INFO_TYPE_FORMAT_PROPERTIES, &size, &properties);
- if(result != XGL_SUCCESS)
+ VK_RESULT result = nextTable.GetFormatInfo(device, pCreateInfo->pColorFormats[i],
+ VK_INFO_TYPE_FORMAT_PROPERTIES, &size, &properties);
+ if(result != VK_SUCCESS)
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_FORMAT pCreateInfo->pColorFormats[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_FORMAT pCreateInfo->pColorFormats[" << i <<
"], cannot be validated (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
if((properties.linearTilingFeatures) == 0 && (properties.optimalTilingFeatures == 0))
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_FORMAT pCreateInfo->pColorFormats[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_FORMAT pCreateInfo->pColorFormats[" << i <<
"], contains unsupported format (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
if(pCreateInfo->pColorLayouts == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_IMAGE_LAYOUT* pCreateInfo->pColorLayouts, "\
+ char const str[] = "vkCreateRenderPass parameter, VK_IMAGE_LAYOUT* pCreateInfo->pColorLayouts, "\
"is nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
for(uint32_t i = 0; i < pCreateInfo->colorAttachmentCount; ++i)
{
- if(!validate_XGL_IMAGE_LAYOUT(pCreateInfo->pColorLayouts[i]))
+ if(!validate_VK_IMAGE_LAYOUT(pCreateInfo->pColorLayouts[i]))
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_IMAGE_LAYOUT pCreateInfo->pColorLayouts[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_IMAGE_LAYOUT pCreateInfo->pColorLayouts[" << i <<
"], is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
}
if(pCreateInfo->pColorLoadOps == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_ATTACHMENT_LOAD_OP* pCreateInfo->pColorLoadOps, "\
+ char const str[] = "vkCreateRenderPass parameter, VK_ATTACHMENT_LOAD_OP* pCreateInfo->pColorLoadOps, "\
"is nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
for(uint32_t i = 0; i < pCreateInfo->colorAttachmentCount; ++i)
{
- if(!validate_XGL_ATTACHMENT_LOAD_OP(pCreateInfo->pColorLoadOps[i]))
+ if(!validate_VK_ATTACHMENT_LOAD_OP(pCreateInfo->pColorLoadOps[i]))
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_ATTACHMENT_LOAD_OP pCreateInfo->pColorLoadOps[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_ATTACHMENT_LOAD_OP pCreateInfo->pColorLoadOps[" << i <<
"], is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
}
if(pCreateInfo->pColorStoreOps == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_ATTACHMENT_STORE_OP* pCreateInfo->pColorStoreOps, "\
+ char const str[] = "vkCreateRenderPass parameter, VK_ATTACHMENT_STORE_OP* pCreateInfo->pColorStoreOps, "\
"is nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
for(uint32_t i = 0; i < pCreateInfo->colorAttachmentCount; ++i)
{
- if(!validate_XGL_ATTACHMENT_STORE_OP(pCreateInfo->pColorStoreOps[i]))
+ if(!validate_VK_ATTACHMENT_STORE_OP(pCreateInfo->pColorStoreOps[i]))
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_ATTACHMENT_STORE_OP pCreateInfo->pColorStoreOps[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_ATTACHMENT_STORE_OP pCreateInfo->pColorStoreOps[" << i <<
"], is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
}
if(pCreateInfo->pColorLoadClearValues == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_CLEAR_COLOR* pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_CLEAR_COLOR* pCreateInfo->"\
"pColorLoadClearValues, is nullptr (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
for(uint32_t i = 0; i < pCreateInfo->colorAttachmentCount; ++i)
{
- if(!xgl_validate_xgl_clear_color(&(pCreateInfo->pColorLoadClearValues[i])))
+ if(!vk_validate_vk_clear_color(&(pCreateInfo->pColorLoadClearValues[i])))
{
std::stringstream ss;
- ss << "xglCreateRenderPass parameter, XGL_CLEAR_COLOR pCreateInfo->pColorLoadClearValues[" << i <<
+ ss << "vkCreateRenderPass parameter, VK_CLEAR_COLOR pCreateInfo->pColorLoadClearValues[" << i <<
"], is invalid (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", ss.str().c_str());
continue;
}
}
- if(!validate_XGL_FORMAT(pCreateInfo->depthStencilFormat))
+ if(!validate_VK_FORMAT(pCreateInfo->depthStencilFormat))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_FORMAT pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_FORMAT pCreateInfo->"\
"depthStencilFormat, is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- XGL_FORMAT_PROPERTIES properties;
+ VK_FORMAT_PROPERTIES properties;
size_t size = sizeof(properties);
- XGL_RESULT result = nextTable.GetFormatInfo(device, pCreateInfo->depthStencilFormat,
- XGL_INFO_TYPE_FORMAT_PROPERTIES, &size, &properties);
- if(result != XGL_SUCCESS)
+ VK_RESULT result = nextTable.GetFormatInfo(device, pCreateInfo->depthStencilFormat,
+ VK_INFO_TYPE_FORMAT_PROPERTIES, &size, &properties);
+ if(result != VK_SUCCESS)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_FORMAT pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_FORMAT pCreateInfo->"\
"depthStencilFormat, cannot be validated (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if((properties.linearTilingFeatures) == 0 && (properties.optimalTilingFeatures == 0))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_FORMAT pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_FORMAT pCreateInfo->"\
"depthStencilFormat, contains unsupported format (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!validate_XGL_IMAGE_LAYOUT(pCreateInfo->depthStencilLayout))
+ if(!validate_VK_IMAGE_LAYOUT(pCreateInfo->depthStencilLayout))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_IMAGE_LAYOUT pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_IMAGE_LAYOUT pCreateInfo->"\
"depthStencilLayout, is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!validate_XGL_ATTACHMENT_LOAD_OP(pCreateInfo->depthLoadOp))
+ if(!validate_VK_ATTACHMENT_LOAD_OP(pCreateInfo->depthLoadOp))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_ATTACHMENT_LOAD_OP pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_ATTACHMENT_LOAD_OP pCreateInfo->"\
"depthLoadOp, is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!validate_XGL_ATTACHMENT_STORE_OP(pCreateInfo->depthStoreOp))
+ if(!validate_VK_ATTACHMENT_STORE_OP(pCreateInfo->depthStoreOp))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_ATTACHMENT_STORE_OP pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_ATTACHMENT_STORE_OP pCreateInfo->"\
"depthStoreOp, is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!validate_XGL_ATTACHMENT_LOAD_OP(pCreateInfo->stencilLoadOp))
+ if(!validate_VK_ATTACHMENT_LOAD_OP(pCreateInfo->stencilLoadOp))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_ATTACHMENT_LOAD_OP pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_ATTACHMENT_LOAD_OP pCreateInfo->"\
"stencilLoadOp, is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
- if(!validate_XGL_ATTACHMENT_STORE_OP(pCreateInfo->stencilStoreOp))
+ if(!validate_VK_ATTACHMENT_STORE_OP(pCreateInfo->stencilStoreOp))
{
- char const str[] = "xglCreateRenderPass parameter, XGL_ATTACHMENT_STORE_OP pCreateInfo->"\
+ char const str[] = "vkCreateRenderPass parameter, VK_ATTACHMENT_STORE_OP pCreateInfo->"\
"stencilStoreOp, is unrecognized (precondition).";
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-void PostCreateRenderPass(XGL_RESULT result, XGL_RENDER_PASS* pRenderPass)
+void PostCreateRenderPass(VK_RESULT result, VK_RENDER_PASS* pRenderPass)
{
- if(result != XGL_SUCCESS)
+ if(result != VK_SUCCESS)
{
- // TODO: Spit out XGL_RESULT value.
- char const str[] = "xglCreateRenderPass failed (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ // TODO: Spit out VK_RESULT value.
+ char const str[] = "vkCreateRenderPass failed (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
if(pRenderPass == nullptr)
{
- char const str[] = "xglCreateRenderPass parameter, XGL_RENDER_PASS* pRenderPass, is nullptr (postcondition).";
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ char const str[] = "vkCreateRenderPass parameter, VK_RENDER_PASS* pRenderPass, is nullptr (postcondition).";
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
return;
}
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglCreateRenderPass(XGL_DEVICE device, const XGL_RENDER_PASS_CREATE_INFO* pCreateInfo, XGL_RENDER_PASS* pRenderPass)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkCreateRenderPass(VK_DEVICE device, const VK_RENDER_PASS_CREATE_INFO* pCreateInfo, VK_RENDER_PASS* pRenderPass)
{
PreCreateRenderPass(device, pCreateInfo);
- XGL_RESULT result = nextTable.CreateRenderPass(device, pCreateInfo, pRenderPass);
+ VK_RESULT result = nextTable.CreateRenderPass(device, pCreateInfo, pRenderPass);
PostCreateRenderPass(result, pRenderPass);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdBeginRenderPass(XGL_CMD_BUFFER cmdBuffer, const XGL_RENDER_PASS_BEGIN* pRenderPassBegin)
+VK_LAYER_EXPORT void VKAPI vkCmdBeginRenderPass(VK_CMD_BUFFER cmdBuffer, const VK_RENDER_PASS_BEGIN* pRenderPassBegin)
{
char str[1024];
if (!pRenderPassBegin) {
sprintf(str, "Struct ptr parameter pRenderPassBegin to function CmdBeginRenderPass is NULL.");
- layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- else if (!xgl_validate_xgl_render_pass_begin(pRenderPassBegin)) {
+ else if (!vk_validate_vk_render_pass_begin(pRenderPassBegin)) {
sprintf(str, "Parameter pRenderPassBegin to function CmdBeginRenderPass contains an invalid value.");
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
nextTable.CmdBeginRenderPass(cmdBuffer, pRenderPassBegin);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdEndRenderPass(XGL_CMD_BUFFER cmdBuffer, XGL_RENDER_PASS renderPass)
+VK_LAYER_EXPORT void VKAPI vkCmdEndRenderPass(VK_CMD_BUFFER cmdBuffer, VK_RENDER_PASS renderPass)
{
nextTable.CmdEndRenderPass(cmdBuffer, renderPass);
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetValidationLevel(XGL_DEVICE device, XGL_VALIDATION_LEVEL validationLevel)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetValidationLevel(VK_DEVICE device, VK_VALIDATION_LEVEL validationLevel)
{
char str[1024];
- if (!validate_XGL_VALIDATION_LEVEL(validationLevel)) {
+ if (!validate_VK_VALIDATION_LEVEL(validationLevel)) {
sprintf(str, "Parameter validationLevel to function DbgSetValidationLevel has invalid value of %i.", (int)validationLevel);
- layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
+ layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, 1, "PARAMCHECK", str);
}
- XGL_RESULT result = nextTable.DbgSetValidationLevel(device, validationLevel);
+ VK_RESULT result = nextTable.DbgSetValidationLevel(device, validationLevel);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
{
// This layer intercepts callbacks
- XGL_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (XGL_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(XGL_LAYER_DBG_FUNCTION_NODE));
+ VK_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (VK_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(VK_LAYER_DBG_FUNCTION_NODE));
if (!pNewDbgFuncNode)
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
pNewDbgFuncNode->pfnMsgCallback = pfnMsgCallback;
pNewDbgFuncNode->pUserData = pUserData;
pNewDbgFuncNode->pNext = g_pDbgFunctionHead;
g_pDbgFunctionHead = pNewDbgFuncNode;
    // Force callbacks if DebugAction hasn't been set to anything other than its initial value
if (g_actionIsDefault) {
- g_debugAction = XGL_DBG_LAYER_ACTION_CALLBACK;
+ g_debugAction = VK_DBG_LAYER_ACTION_CALLBACK;
}
- XGL_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
+ VK_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
- XGL_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
- XGL_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;
+ VK_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;
+ VK_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;
while (pTrav) {
if (pTrav->pfnMsgCallback == pfnMsgCallback) {
pPrev->pNext = pTrav->pNext;
if (g_pDbgFunctionHead == NULL)
{
if (g_actionIsDefault)
- g_debugAction = XGL_DBG_LAYER_ACTION_LOG_MSG;
+ g_debugAction = VK_DBG_LAYER_ACTION_LOG_MSG;
else
- g_debugAction = (XGL_LAYER_DBG_ACTION)(g_debugAction & ~((uint32_t)XGL_DBG_LAYER_ACTION_CALLBACK));
+ g_debugAction = (VK_LAYER_DBG_ACTION)(g_debugAction & ~((uint32_t)VK_DBG_LAYER_ACTION_CALLBACK));
}
- XGL_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
+ VK_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetMessageFilter(XGL_DEVICE device, int32_t msgCode, XGL_DBG_MSG_FILTER filter)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetMessageFilter(VK_DEVICE device, int32_t msgCode, VK_DBG_MSG_FILTER filter)
{
- XGL_RESULT result = nextTable.DbgSetMessageFilter(device, msgCode, filter);
+ VK_RESULT result = nextTable.DbgSetMessageFilter(device, msgCode, filter);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetObjectTag(XGL_BASE_OBJECT object, size_t tagSize, const void* pTag)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetObjectTag(VK_BASE_OBJECT object, size_t tagSize, const void* pTag)
{
- XGL_RESULT result = nextTable.DbgSetObjectTag(object, tagSize, pTag);
+ VK_RESULT result = nextTable.DbgSetObjectTag(object, tagSize, pTag);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetGlobalOption(XGL_INSTANCE instance, XGL_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetGlobalOption(VK_INSTANCE instance, VK_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
{
- XGL_RESULT result = nextTable.DbgSetGlobalOption(instance, dbgOption, dataSize, pData);
+ VK_RESULT result = nextTable.DbgSetGlobalOption(instance, dbgOption, dataSize, pData);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgSetDeviceOption(XGL_DEVICE device, XGL_DBG_DEVICE_OPTION dbgOption, size_t dataSize, const void* pData)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgSetDeviceOption(VK_DEVICE device, VK_DBG_DEVICE_OPTION dbgOption, size_t dataSize, const void* pData)
{
- XGL_RESULT result = nextTable.DbgSetDeviceOption(device, dbgOption, dataSize, pData);
+ VK_RESULT result = nextTable.DbgSetDeviceOption(device, dbgOption, dataSize, pData);
return result;
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDbgMarkerBegin(XGL_CMD_BUFFER cmdBuffer, const char* pMarker)
+VK_LAYER_EXPORT void VKAPI vkCmdDbgMarkerBegin(VK_CMD_BUFFER cmdBuffer, const char* pMarker)
{
nextTable.CmdDbgMarkerBegin(cmdBuffer, pMarker);
}
-XGL_LAYER_EXPORT void XGLAPI xglCmdDbgMarkerEnd(XGL_CMD_BUFFER cmdBuffer)
+VK_LAYER_EXPORT void VKAPI vkCmdDbgMarkerEnd(VK_CMD_BUFFER cmdBuffer)
{
nextTable.CmdDbgMarkerEnd(cmdBuffer);
}
#if defined(__linux__) || defined(XCB_NVIDIA)
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11AssociateConnection(XGL_PHYSICAL_GPU gpu, const XGL_WSI_X11_CONNECTION_INFO* pConnectionInfo)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11AssociateConnection(VK_PHYSICAL_GPU gpu, const VK_WSI_X11_CONNECTION_INFO* pConnectionInfo)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
pCurObj = gpuw;
loader_platform_thread_once(&tabOnce, initParamChecker);
- XGL_RESULT result = nextTable.WsiX11AssociateConnection((XGL_PHYSICAL_GPU)gpuw->nextObject, pConnectionInfo);
+ VK_RESULT result = nextTable.WsiX11AssociateConnection((VK_PHYSICAL_GPU)gpuw->nextObject, pConnectionInfo);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11GetMSC(XGL_DEVICE device, xcb_window_t window, xcb_randr_crtc_t crtc, uint64_t* pMsc)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11GetMSC(VK_DEVICE device, xcb_window_t window, xcb_randr_crtc_t crtc, uint64_t* pMsc)
{
- XGL_RESULT result = nextTable.WsiX11GetMSC(device, window, crtc, pMsc);
+ VK_RESULT result = nextTable.WsiX11GetMSC(device, window, crtc, pMsc);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11CreatePresentableImage(XGL_DEVICE device, const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo, XGL_IMAGE* pImage, XGL_GPU_MEMORY* pMem)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11CreatePresentableImage(VK_DEVICE device, const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO* pCreateInfo, VK_IMAGE* pImage, VK_GPU_MEMORY* pMem)
{
- XGL_RESULT result = nextTable.WsiX11CreatePresentableImage(device, pCreateInfo, pImage, pMem);
+ VK_RESULT result = nextTable.WsiX11CreatePresentableImage(device, pCreateInfo, pImage, pMem);
return result;
}
-XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglWsiX11QueuePresent(XGL_QUEUE queue, const XGL_WSI_X11_PRESENT_INFO* pPresentInfo, XGL_FENCE fence)
+VK_LAYER_EXPORT VK_RESULT VKAPI vkWsiX11QueuePresent(VK_QUEUE queue, const VK_WSI_X11_PRESENT_INFO* pPresentInfo, VK_FENCE fence)
{
- XGL_RESULT result = nextTable.WsiX11QueuePresent(queue, pPresentInfo, fence);
+ VK_RESULT result = nextTable.WsiX11QueuePresent(queue, pPresentInfo, fence);
return result;
}
#endif
-#include "xgl_generic_intercept_proc_helper.h"
-XGL_LAYER_EXPORT void* XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* funcName)
+#include "vk_generic_intercept_proc_helper.h"
+VK_LAYER_EXPORT void* VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* funcName)
{
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
void* addr;
if (gpu == NULL)
return NULL;
else {
if (gpuw->pGPA == NULL)
return NULL;
- return gpuw->pGPA((XGL_PHYSICAL_GPU)gpuw->nextObject, funcName);
+ return gpuw->pGPA((VK_PHYSICAL_GPU)gpuw->nextObject, funcName);
}
}
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -DDEBUG")
if (WIN32)
- set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS -DXCB_NVIDIA")
+ set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES -D_CRT_SECURE_NO_WARNINGS -DXCB_NVIDIA")
add_library(XGL SHARED loader.c loader.h dirent_on_windows.c dispatch.c table_ops.h XGL.def)
set_target_properties(XGL PROPERTIES LINK_FLAGS "/DEF:${PROJECT_SOURCE_DIR}/loader/XGL.def")
target_link_libraries(XGL)
endif()
if (NOT WIN32)
- set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DXGL_PROTOTYPES -Wpointer-arith")
+ set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DVK_PROTOTYPES -Wpointer-arith")
add_library(XGL SHARED loader.c dispatch.c table_ops.h)
set_target_properties(XGL PROPERTIES SOVERSION 0)
# Loader Description
## Overview
-The Loader implements the main XGL library (e.g. "XGL.dll" on Windows and
-"libXGL.so" on Linux). It handles layer management and driver management. The
+The Loader implements the main VK library (e.g. "VK.dll" on Windows and
+"libVK.so" on Linux). It handles layer management and driver management. The
loader fully supports multi-gpu operation. As part of this, it dispatches API
calls to the correct driver, and to the correct layers, based on the GPU object
selected by the application.
loader supports layers that operate on multiple GPUs.
## Environment Variables
-**LIBXGL\_DRIVERS\_PATH** directory for loader to search for ICD driver libraries to open
+**LIBVK\_DRIVERS\_PATH** directory for loader to search for ICD driver libraries to open
-**LIBXGL\_LAYERS\_PATH** directory for loader to search for layer libraries that may get activated and used at xglCreateDevice() time.
+**LIBVK\_LAYERS\_PATH** directory for loader to search for layer libraries that may get activated and used at vkCreateDevice() time.
-**LIBXGL\_LAYER\_NAMES** colon-separated list of layer names to be activated (e.g., LIBXGL\_LAYER\_NAMES=MemTracker:DrawState).
+**LIBVK\_LAYER\_NAMES** colon-separated list of layer names to be activated (e.g., LIBVK\_LAYER\_NAMES=MemTracker:DrawState).
-Note: Both of the LIBXGL\_*\_PATH variables may contain more than one directory. Each directory must be separated by one of the following characters, depending on your OS:
+Note: Both of the LIBVK\_*\_PATH variables may contain more than one directory. Each directory must be separated by one of the following characters, depending on your OS:
- ";" on Windows
- ":" on Linux
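Putting the three variables together, a typical Linux invocation might look like this; the directory names are hypothetical, so substitute your own build tree:

```shell
# Hypothetical build-tree layout; adjust paths to your checkout.
export LIBVK_DRIVERS_PATH=$PWD/dbuild/icd/intel:/opt/vk/icd   # ":"-separated on Linux
export LIBVK_LAYERS_PATH=$PWD/dbuild/layers
export LIBVK_LAYER_NAMES=MemTracker:DrawState                 # activated in this order
```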
## Interface to driver (ICD)
-- xglEnumerateGpus exported
-- xglCreateInstance exported
-- xglDestroyInstance exported
-- xglGetProcAddr exported and returns valid function pointers for all the XGL API entrypoints
-- all objects created by ICD can be cast to (XGL\_LAYER\_DISPATCH\_TABLE \*\*)
+- vkEnumerateGpus exported
+- vkCreateInstance exported
+- vkDestroyInstance exported
+- vkGetProcAddr exported and returns valid function pointers for all the VK API entrypoints
+- all objects created by ICD can be cast to (VK\_LAYER\_DISPATCH\_TABLE \*\*)
where the loader will replace the first entry with a pointer to the dispatch table which is
owned by the loader. This implies three things for ICD drivers:
1. The ICD must return a pointer for the opaque object handle
2. This pointer points to a regular C structure with the first entry being a pointer.
- Note: for any C++ ICD's that implement XGL objects directly as C++ classes.
+ Note: this applies to any C++ ICD that implements VK objects directly as C++ classes.
The C++ compiler may put a vtable at offset zero, if your class is virtual.
In this case use a regular C structure (see below).
3. The reservedForLoader.loaderMagic member must be initialized with ICD\_LOADER\_MAGIC, as follows:
```
- #include "xglIcd.h"
+ #include "vkIcd.h"
struct {
- XGL_LOADER_DATA reservedForLoader; // Reserve space for pointer to loader's dispatch table
+ VK_LOADER_DATA reservedForLoader; // Reserve space for pointer to loader's dispatch table
myObjectClass myObj; // Your driver's C++ class
- } xglObj;
+ } vkObj;
- xglObj alloc_icd_obj()
+ vkObj *alloc_icd_obj()
{
- xglObj *newObj = alloc_obj();
+ vkObj *newObj = alloc_obj();
...
// Initialize pointer to loader's dispatch table with ICD_LOADER_MAGIC
set_loader_magic_value(newObj);
Additional Notes:
- The ICD may or may not implement a dispatch table.
-- ICD entrypoints can be named anything including the offcial xgl name such as xglCreateDevice(). However, beware of interposing by dynamic OS library loaders if the offical names are used. On Linux, if offical names are used, the ICD library must be linked with -Bsymbolic.
+- ICD entrypoints can be named anything, including the official vk name such as vkCreateDevice(). However, beware of interposing by dynamic OS library loaders if the official names are used. On Linux, if official names are used, the ICD library must be linked with -Bsymbolic.
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#include "loader_platform.h"
#include "table_ops.h"
#include "loader.h"
-#include "xglIcd.h"
+#include "vkIcd.h"
// The following is #included again to catch certain OS-specific functions
// being used:
#include "loader_platform.h"
struct loader_icd {
const struct loader_scanned_icds *scanned_icds;
- XGL_LAYER_DISPATCH_TABLE *loader_dispatch;
- uint32_t layer_count[XGL_MAX_PHYSICAL_GPUS];
- struct loader_layers layer_libs[XGL_MAX_PHYSICAL_GPUS][MAX_LAYER_LIBRARIES];
- XGL_BASE_LAYER_OBJECT *wrappedGpus[XGL_MAX_PHYSICAL_GPUS];
+ VK_LAYER_DISPATCH_TABLE *loader_dispatch;
+ uint32_t layer_count[VK_MAX_PHYSICAL_GPUS];
+ struct loader_layers layer_libs[VK_MAX_PHYSICAL_GPUS][MAX_LAYER_LIBRARIES];
+ VK_BASE_LAYER_OBJECT *wrappedGpus[VK_MAX_PHYSICAL_GPUS];
uint32_t gpu_count;
- XGL_BASE_LAYER_OBJECT *gpus;
+ VK_BASE_LAYER_OBJECT *gpus;
struct loader_icd *next;
};
struct loader_scanned_icds {
loader_platform_dl_handle handle;
- xglGetProcAddrType GetProcAddr;
- xglCreateInstanceType CreateInstance;
- xglDestroyInstanceType DestroyInstance;
- xglEnumerateGpusType EnumerateGpus;
- xglGetExtensionSupportType GetExtensionSupport;
- XGL_INSTANCE instance;
+ vkGetProcAddrType GetProcAddr;
+ vkCreateInstanceType CreateInstance;
+ vkDestroyInstanceType DestroyInstance;
+ vkEnumerateGpusType EnumerateGpus;
+ vkGetExtensionSupportType GetExtensionSupport;
+ VK_INSTANCE instance;
struct loader_scanned_icds *next;
};
size_t rtn_len;
registry_str = loader_get_registry_string(HKEY_LOCAL_MACHINE,
- "Software\\XGL",
+ "Software\\VK",
registry_value);
registry_len = (registry_str) ? strlen(registry_str) : 0;
#endif // WIN32
-static void loader_log(XGL_DBG_MSG_TYPE msg_type, int32_t msg_code,
+static void loader_log(VK_DBG_MSG_TYPE msg_type, int32_t msg_code,
const char *format, ...)
{
char msg[256];
// Used to call: dlopen(filename, RTLD_LAZY);
handle = loader_platform_open_library(filename);
if (!handle) {
- loader_log(XGL_DBG_MSG_WARNING, 0, loader_platform_open_library_error(filename));
+ loader_log(VK_DBG_MSG_WARNING, 0, loader_platform_open_library_error(filename));
return;
}
#define LOOKUP(func_ptr, func) do { \
- func_ptr = (xgl ##func## Type) loader_platform_get_proc_address(handle, "xgl" #func); \
+ func_ptr = (vk ##func## Type) loader_platform_get_proc_address(handle, "vk" #func); \
if (!func_ptr) { \
- loader_log(XGL_DBG_MSG_WARNING, 0, loader_platform_get_proc_address_error("xgl" #func)); \
+ loader_log(VK_DBG_MSG_WARNING, 0, loader_platform_get_proc_address_error("vk" #func)); \
return; \
} \
} while (0)
new_node = (struct loader_scanned_icds *) malloc(sizeof(struct loader_scanned_icds));
if (!new_node) {
- loader_log(XGL_DBG_MSG_WARNING, 0, "Out of memory can't add icd");
+ loader_log(VK_DBG_MSG_WARNING, 0, "Out of memory can't add icd");
return;
}
/**
- * Try to \c loader_icd_scan XGL driver(s).
+ * Try to \c loader_icd_scan VK driver(s).
*
* This function scans the default system path or path
- * specified by the \c LIBXGL_DRIVERS_PATH environment variable in
- * order to find loadable XGL ICDs with the name of libXGL_*.
+ * specified by the \c LIBVK_DRIVERS_PATH environment variable in
+ * order to find loadable VK ICDs with the name of libVK_*.
*
* \returns
* void; but side effect is to set loader_icd_scanned to true
must_free_libPaths = true;
} else {
must_free_libPaths = false;
- libPaths = DEFAULT_XGL_DRIVERS_PATH;
+ libPaths = DEFAULT_VK_DRIVERS_PATH;
}
#else // WIN32
if (geteuid() == getuid()) {
libPaths = getenv(DRIVER_PATH_ENV);
}
if (libPaths == NULL) {
- libPaths = DEFAULT_XGL_DRIVERS_PATH;
+ libPaths = DEFAULT_VK_DRIVERS_PATH;
}
#endif // WIN32
if (sysdir) {
dent = readdir(sysdir);
while (dent) {
- /* Look for ICDs starting with XGL_DRIVER_LIBRARY_PREFIX and
- * ending with XGL_LIBRARY_SUFFIX
+ /* Look for ICDs starting with VK_DRIVER_LIBRARY_PREFIX and
+ * ending with VK_LIBRARY_SUFFIX
*/
if (!strncmp(dent->d_name,
- XGL_DRIVER_LIBRARY_PREFIX,
- XGL_DRIVER_LIBRARY_PREFIX_LEN)) {
+ VK_DRIVER_LIBRARY_PREFIX,
+ VK_DRIVER_LIBRARY_PREFIX_LEN)) {
uint32_t nlen = (uint32_t) strlen(dent->d_name);
- const char *suf = dent->d_name + nlen - XGL_LIBRARY_SUFFIX_LEN;
- if ((nlen > XGL_LIBRARY_SUFFIX_LEN) &&
+ const char *suf = dent->d_name + nlen - VK_LIBRARY_SUFFIX_LEN;
+ if ((nlen > VK_LIBRARY_SUFFIX_LEN) &&
!strncmp(suf,
- XGL_LIBRARY_SUFFIX,
- XGL_LIBRARY_SUFFIX_LEN)) {
+ VK_LIBRARY_SUFFIX,
+ VK_LIBRARY_SUFFIX_LEN)) {
snprintf(icd_library, 1024, "%s" DIRECTORY_SYMBOL "%s", p,dent->d_name);
loader_scanned_icd_add(icd_library);
}
must_free_libPaths = true;
} else {
must_free_libPaths = false;
- libPaths = DEFAULT_XGL_LAYERS_PATH;
+ libPaths = DEFAULT_VK_LAYERS_PATH;
}
#else // WIN32
if (geteuid() == getuid()) {
libPaths = getenv(LAYERS_PATH_ENV);
}
if (libPaths == NULL) {
- libPaths = DEFAULT_XGL_LAYERS_PATH;
+ libPaths = DEFAULT_VK_LAYERS_PATH;
}
#endif // WIN32
if (curdir) {
dent = readdir(curdir);
while (dent) {
- /* Look for layers starting with XGL_LAYER_LIBRARY_PREFIX and
- * ending with XGL_LIBRARY_SUFFIX
+ /* Look for layers starting with VK_LAYER_LIBRARY_PREFIX and
+ * ending with VK_LIBRARY_SUFFIX
*/
if (!strncmp(dent->d_name,
- XGL_LAYER_LIBRARY_PREFIX,
- XGL_LAYER_LIBRARY_PREFIX_LEN)) {
+ VK_LAYER_LIBRARY_PREFIX,
+ VK_LAYER_LIBRARY_PREFIX_LEN)) {
uint32_t nlen = (uint32_t) strlen(dent->d_name);
- const char *suf = dent->d_name + nlen - XGL_LIBRARY_SUFFIX_LEN;
- if ((nlen > XGL_LIBRARY_SUFFIX_LEN) &&
+ const char *suf = dent->d_name + nlen - VK_LIBRARY_SUFFIX_LEN;
+ if ((nlen > VK_LIBRARY_SUFFIX_LEN) &&
!strncmp(suf,
- XGL_LIBRARY_SUFFIX,
- XGL_LIBRARY_SUFFIX_LEN)) {
+ VK_LIBRARY_SUFFIX,
+ VK_LIBRARY_SUFFIX_LEN)) {
loader_platform_dl_handle handle;
snprintf(temp_str, sizeof(temp_str), "%s" DIRECTORY_SYMBOL "%s",p,dent->d_name);
// Used to call: dlopen(temp_str, RTLD_LAZY)
continue;
}
if (loader.scanned_layer_count == MAX_LAYER_LIBRARIES) {
- loader_log(XGL_DBG_MSG_ERROR, 0, "%s ignored: max layer libraries exceed", temp_str);
+ loader_log(VK_DBG_MSG_ERROR, 0, "%s ignored: max layer libraries exceeded", temp_str);
break;
}
if ((loader.scanned_layer_names[loader.scanned_layer_count] = malloc(strlen(temp_str) + 1)) == NULL) {
- loader_log(XGL_DBG_MSG_ERROR, 0, "%s ignored: out of memory", temp_str);
+ loader_log(VK_DBG_MSG_ERROR, 0, "%s ignored: out of memory", temp_str);
break;
}
strcpy(loader.scanned_layer_names[loader.scanned_layer_count], temp_str);
loader.layer_scanned = true;
}
-static void loader_init_dispatch_table(XGL_LAYER_DISPATCH_TABLE *tab, xglGetProcAddrType fpGPA, XGL_PHYSICAL_GPU gpu)
+static void loader_init_dispatch_table(VK_LAYER_DISPATCH_TABLE *tab, vkGetProcAddrType fpGPA, VK_PHYSICAL_GPU gpu)
{
loader_initialize_dispatch_table(tab, fpGPA, gpu);
if (tab->EnumerateLayers == NULL)
- tab->EnumerateLayers = xglEnumerateLayers;
+ tab->EnumerateLayers = vkEnumerateLayers;
}
-static struct loader_icd * loader_get_icd(const XGL_BASE_LAYER_OBJECT *gpu, uint32_t *gpu_index)
+static struct loader_icd * loader_get_icd(const VK_BASE_LAYER_OBJECT *gpu, uint32_t *gpu_index)
{
for (struct loader_instance *inst = loader.instances; inst; inst = inst->next) {
for (struct loader_icd *icd = inst->icds; icd; icd = icd->next) {
obj->name[sizeof(obj->name) - 1] = '\0';
// Used to call: dlopen(pLayerNames[i].lib_name, RTLD_LAZY | RTLD_DEEPBIND)
if ((obj->lib_handle = loader_platform_open_library(pLayerNames[i].lib_name)) == NULL) {
- loader_log(XGL_DBG_MSG_ERROR, 0, loader_platform_open_library_error(pLayerNames[i].lib_name));
+ loader_log(VK_DBG_MSG_ERROR, 0, loader_platform_open_library_error(pLayerNames[i].lib_name));
continue;
} else {
- loader_log(XGL_DBG_MSG_UNKNOWN, 0, "Inserting layer %s from library %s", pLayerNames[i].layer_name, pLayerNames[i].lib_name);
+ loader_log(VK_DBG_MSG_UNKNOWN, 0, "Inserting layer %s from library %s", pLayerNames[i].layer_name, pLayerNames[i].lib_name);
}
free(pLayerNames[i].layer_name);
icd->layer_count[gpu_index]++;
}
}
-static XGL_RESULT find_layer_extension(struct loader_icd *icd, uint32_t gpu_index, const char *pExtName, const char **lib_name)
+static VK_RESULT find_layer_extension(struct loader_icd *icd, uint32_t gpu_index, const char *pExtName, const char **lib_name)
{
- XGL_RESULT err;
+ VK_RESULT err;
char *search_name;
loader_platform_dl_handle handle;
- xglGetExtensionSupportType fpGetExtensionSupport;
+ vkGetExtensionSupportType fpGetExtensionSupport;
/*
* The loader provides the abstraction that make layers and extensions work via
* the currently defined extension mechanism. That is, when app queries for an extension
- * via xglGetExtensionSupport, the loader will call both the driver as well as any layers
+ * via vkGetExtensionSupport, the loader will call both the driver as well as any layers
* to see who implements that extension. Then, if the app enables the extension during
- * xglCreateDevice the loader will find and load any layers that implement that extension.
+ * vkCreateDevice the loader will find and load any layers that implement that extension.
*/
// TODO: What if extension is in multiple places?
// TODO: Who should we ask first? Driver or layers? Do driver for now.
- err = icd->scanned_icds[gpu_index].GetExtensionSupport((XGL_PHYSICAL_GPU) (icd->gpus[gpu_index].nextObject), pExtName);
- if (err == XGL_SUCCESS) {
+ err = icd->scanned_icds[gpu_index].GetExtensionSupport((VK_PHYSICAL_GPU) (icd->gpus[gpu_index].nextObject), pExtName);
+ if (err == VK_SUCCESS) {
if (lib_name) {
*lib_name = NULL;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
for (unsigned int j = 0; j < loader.scanned_layer_count; j++) {
if ((handle = loader_platform_open_library(search_name)) == NULL)
continue;
- fpGetExtensionSupport = loader_platform_get_proc_address(handle, "xglGetExtensionSupport");
+ fpGetExtensionSupport = loader_platform_get_proc_address(handle, "vkGetExtensionSupport");
if (fpGetExtensionSupport != NULL) {
// Found layer's GetExtensionSupport call
- err = fpGetExtensionSupport((XGL_PHYSICAL_GPU) (icd->gpus + gpu_index), pExtName);
+ err = fpGetExtensionSupport((VK_PHYSICAL_GPU) (icd->gpus + gpu_index), pExtName);
loader_platform_close_library(handle);
- if (err == XGL_SUCCESS) {
+ if (err == VK_SUCCESS) {
if (lib_name) {
*lib_name = loader.scanned_layer_names[j];
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
} else {
loader_platform_close_library(handle);
// No GetExtensionSupport or GetExtensionSupport returned invalid extension
// for the layer, so test the layer name as if it is an extension name
- // use default layer name based on library name XGL_LAYER_LIBRARY_PREFIX<name>.XGL_LIBRARY_SUFFIX
+ // use default layer name based on library name VK_LAYER_LIBRARY_PREFIX<name>.VK_LIBRARY_SUFFIX
char *pEnd;
size_t siz;
search_name = basename(search_name);
- search_name += strlen(XGL_LAYER_LIBRARY_PREFIX);
+ search_name += strlen(VK_LAYER_LIBRARY_PREFIX);
pEnd = strrchr(search_name, '.');
siz = (int) (pEnd - search_name);
if (siz != strlen(pExtName))
if (lib_name) {
*lib_name = loader.scanned_layer_names[j];
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
}
- return XGL_ERROR_INVALID_EXTENSION;
+ return VK_ERROR_INVALID_EXTENSION;
}
static uint32_t loader_get_layer_env(struct loader_icd *icd, uint32_t gpu_index, struct layer_name_pair *pLayerNames)
next++;
}
name = basename(p);
- if (find_layer_extension(icd, gpu_index, name, &lib_name) != XGL_SUCCESS) {
+ if (find_layer_extension(icd, gpu_index, name, &lib_name) != VK_SUCCESS) {
p = next;
continue;
}
return count;
}
-static uint32_t loader_get_layer_libs(struct loader_icd *icd, uint32_t gpu_index, const XGL_DEVICE_CREATE_INFO* pCreateInfo, struct layer_name_pair **ppLayerNames)
+static uint32_t loader_get_layer_libs(struct loader_icd *icd, uint32_t gpu_index, const VK_DEVICE_CREATE_INFO* pCreateInfo, struct layer_name_pair **ppLayerNames)
{
static struct layer_name_pair layerNames[MAX_LAYER_LIBRARIES];
const char *lib_name = NULL;
for (uint32_t i = 0; i < pCreateInfo->extensionCount; i++) {
const char *pExtName = pCreateInfo->ppEnabledExtensionNames[i];
- if (find_layer_extension(icd, gpu_index, pExtName, &lib_name) == XGL_SUCCESS) {
+ if (find_layer_extension(icd, gpu_index, pExtName, &lib_name) == VK_SUCCESS) {
uint32_t len;
/*
}
}
-extern uint32_t loader_activate_layers(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo)
+extern uint32_t loader_activate_layers(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo)
{
uint32_t gpu_index;
uint32_t count;
struct layer_name_pair *pLayerNames;
- struct loader_icd *icd = loader_get_icd((const XGL_BASE_LAYER_OBJECT *) gpu, &gpu_index);
+ struct loader_icd *icd = loader_get_icd((const VK_BASE_LAYER_OBJECT *) gpu, &gpu_index);
if (!icd)
return 0;
- assert(gpu_index < XGL_MAX_PHYSICAL_GPUS);
+ assert(gpu_index < VK_MAX_PHYSICAL_GPUS);
/* activate any layer libraries */
if (!loader_layers_activated(icd, gpu_index)) {
- XGL_BASE_LAYER_OBJECT *gpuObj = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_BASE_LAYER_OBJECT *nextGpuObj, *baseObj = gpuObj->baseObject;
- xglGetProcAddrType nextGPA = xglGetProcAddr;
+ VK_BASE_LAYER_OBJECT *gpuObj = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_BASE_LAYER_OBJECT *nextGpuObj, *baseObj = gpuObj->baseObject;
+ vkGetProcAddrType nextGPA = vkGetProcAddr;
count = loader_get_layer_libs(icd, gpu_index, pCreateInfo, &pLayerNames);
if (!count)
return 0;
loader_init_layer_libs(icd, gpu_index, pLayerNames, count);
- icd->wrappedGpus[gpu_index] = malloc(sizeof(XGL_BASE_LAYER_OBJECT) * icd->layer_count[gpu_index]);
+ icd->wrappedGpus[gpu_index] = malloc(sizeof(VK_BASE_LAYER_OBJECT) * icd->layer_count[gpu_index]);
if (! icd->wrappedGpus[gpu_index])
- loader_log(XGL_DBG_MSG_ERROR, 0, "Failed to malloc Gpu objects for layer");
+ loader_log(VK_DBG_MSG_ERROR, 0, "Failed to malloc Gpu objects for layer");
for (int32_t i = icd->layer_count[gpu_index] - 1; i >= 0; i--) {
nextGpuObj = (icd->wrappedGpus[gpu_index] + i);
nextGpuObj->pGPA = nextGPA;
char funcStr[256];
snprintf(funcStr, 256, "%sGetProcAddr",icd->layer_libs[gpu_index][i].name);
- if ((nextGPA = (xglGetProcAddrType) loader_platform_get_proc_address(icd->layer_libs[gpu_index][i].lib_handle, funcStr)) == NULL)
- nextGPA = (xglGetProcAddrType) loader_platform_get_proc_address(icd->layer_libs[gpu_index][i].lib_handle, "xglGetProcAddr");
+ if ((nextGPA = (vkGetProcAddrType) loader_platform_get_proc_address(icd->layer_libs[gpu_index][i].lib_handle, funcStr)) == NULL)
+ nextGPA = (vkGetProcAddrType) loader_platform_get_proc_address(icd->layer_libs[gpu_index][i].lib_handle, "vkGetProcAddr");
if (!nextGPA) {
- loader_log(XGL_DBG_MSG_ERROR, 0, "Failed to find xglGetProcAddr in layer %s", icd->layer_libs[gpu_index][i].name);
+ loader_log(VK_DBG_MSG_ERROR, 0, "Failed to find vkGetProcAddr in layer %s", icd->layer_libs[gpu_index][i].name);
continue;
}
if (i == 0) {
loader_init_dispatch_table(icd->loader_dispatch + gpu_index, nextGPA, gpuObj);
//Insert the new wrapped objects into the list with loader object at head
- ((XGL_BASE_LAYER_OBJECT *) gpu)->nextObject = gpuObj;
- ((XGL_BASE_LAYER_OBJECT *) gpu)->pGPA = nextGPA;
+ ((VK_BASE_LAYER_OBJECT *) gpu)->nextObject = gpuObj;
+ ((VK_BASE_LAYER_OBJECT *) gpu)->pGPA = nextGPA;
gpuObj = icd->wrappedGpus[gpu_index] + icd->layer_count[gpu_index] - 1;
gpuObj->nextObject = baseObj;
gpuObj->pGPA = icd->scanned_icds->GetProcAddr;
count = loader_get_layer_libs(icd, gpu_index, pCreateInfo, &pLayerNames);
for (uint32_t i = 0; i < count; i++) {
if (strcmp(icd->layer_libs[gpu_index][i].name, pLayerNames[i].layer_name)) {
- loader_log(XGL_DBG_MSG_ERROR, 0, "Layers activated != Layers requested");
+ loader_log(VK_DBG_MSG_ERROR, 0, "Layers activated != Layers requested");
break;
}
}
if (count != icd->layer_count[gpu_index]) {
- loader_log(XGL_DBG_MSG_ERROR, 0, "Number of Layers activated != number requested");
+ loader_log(VK_DBG_MSG_ERROR, 0, "Number of Layers activated != number requested");
}
}
return icd->layer_count[gpu_index];
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglCreateInstance(
- const XGL_INSTANCE_CREATE_INFO* pCreateInfo,
- XGL_INSTANCE* pInstance)
+LOADER_EXPORT VK_RESULT VKAPI vkCreateInstance(
+ const VK_INSTANCE_CREATE_INFO* pCreateInfo,
+ VK_INSTANCE* pInstance)
{
static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(once_icd);
static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(once_layer);
struct loader_instance *ptr_instance = NULL;
struct loader_scanned_icds *scanned_icds;
struct loader_icd *icd;
- XGL_RESULT res = XGL_ERROR_INITIALIZATION_FAILED;
+ VK_RESULT res = VK_ERROR_INITIALIZATION_FAILED;
/* Scan/discover all ICD libraries in a single-threaded manner */
loader_platform_thread_once(&once_icd, loader_icd_scan);
ptr_instance = (struct loader_instance*) malloc(sizeof(struct loader_instance));
if (ptr_instance == NULL) {
- return XGL_ERROR_OUT_OF_MEMORY;
+ return VK_ERROR_OUT_OF_MEMORY;
}
memset(ptr_instance, 0, sizeof(struct loader_instance));
if (icd) {
res = scanned_icds->CreateInstance(pCreateInfo,
&(scanned_icds->instance));
- if (res != XGL_SUCCESS)
+ if (res != VK_SUCCESS)
{
ptr_instance->icds = ptr_instance->icds->next;
loader_icd_destroy(icd);
scanned_icds->instance = NULL;
- loader_log(XGL_DBG_MSG_WARNING, 0,
+ loader_log(VK_DBG_MSG_WARNING, 0,
"ICD ignored: failed to CreateInstance on device");
}
}
}
if (ptr_instance->icds == NULL) {
- return XGL_ERROR_INCOMPATIBLE_DRIVER;
+ return VK_ERROR_INCOMPATIBLE_DRIVER;
}
- *pInstance = (XGL_INSTANCE) ptr_instance;
- return XGL_SUCCESS;
+ *pInstance = (VK_INSTANCE) ptr_instance;
+ return VK_SUCCESS;
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglDestroyInstance(
- XGL_INSTANCE instance)
+LOADER_EXPORT VK_RESULT VKAPI vkDestroyInstance(
+ VK_INSTANCE instance)
{
struct loader_instance *ptr_instance = (struct loader_instance *) instance;
struct loader_scanned_icds *scanned_icds;
- XGL_RESULT res;
+ VK_RESULT res;
// Remove this instance from the list of instances:
struct loader_instance *prev = NULL;
}
if (next == NULL) {
// This must be an invalid instance handle or empty list
- return XGL_ERROR_INVALID_HANDLE;
+ return VK_ERROR_INVALID_HANDLE;
}
// cleanup any prior layer initializations
while (scanned_icds) {
if (scanned_icds->instance)
res = scanned_icds->DestroyInstance(scanned_icds->instance);
- if (res != XGL_SUCCESS)
- loader_log(XGL_DBG_MSG_WARNING, 0,
+ if (res != VK_SUCCESS)
+ loader_log(VK_DBG_MSG_WARNING, 0,
"ICD ignored: failed to DestroyInstance on device");
scanned_icds->instance = NULL;
scanned_icds = scanned_icds->next;
free(ptr_instance);
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglEnumerateGpus(
+LOADER_EXPORT VK_RESULT VKAPI vkEnumerateGpus(
- XGL_INSTANCE instance,
+ VK_INSTANCE instance,
uint32_t maxGpus,
uint32_t* pGpuCount,
- XGL_PHYSICAL_GPU* pGpus)
+ VK_PHYSICAL_GPU* pGpus)
{
struct loader_instance *ptr_instance = (struct loader_instance *) instance;
struct loader_icd *icd;
uint32_t count = 0;
- XGL_RESULT res;
+ VK_RESULT res;
- //in spirit of XGL don't error check on the instance parameter
+ //in spirit of VK don't error check on the instance parameter
icd = ptr_instance->icds;
while (icd) {
- XGL_PHYSICAL_GPU gpus[XGL_MAX_PHYSICAL_GPUS];
- XGL_BASE_LAYER_OBJECT * wrapped_gpus;
- xglGetProcAddrType get_proc_addr = icd->scanned_icds->GetProcAddr;
+ VK_PHYSICAL_GPU gpus[VK_MAX_PHYSICAL_GPUS];
+ VK_BASE_LAYER_OBJECT * wrapped_gpus;
+ vkGetProcAddrType get_proc_addr = icd->scanned_icds->GetProcAddr;
uint32_t n, max = maxGpus - count;
- if (max > XGL_MAX_PHYSICAL_GPUS) {
- max = XGL_MAX_PHYSICAL_GPUS;
+ if (max > VK_MAX_PHYSICAL_GPUS) {
+ max = VK_MAX_PHYSICAL_GPUS;
}
res = icd->scanned_icds->EnumerateGpus(icd->scanned_icds->instance,
max, &n,
gpus);
- if (res == XGL_SUCCESS && n) {
- wrapped_gpus = (XGL_BASE_LAYER_OBJECT*) malloc(n *
- sizeof(XGL_BASE_LAYER_OBJECT));
+ if (res == VK_SUCCESS && n) {
+ wrapped_gpus = (VK_BASE_LAYER_OBJECT*) malloc(n *
+ sizeof(VK_BASE_LAYER_OBJECT));
icd->gpus = wrapped_gpus;
icd->gpu_count = n;
- icd->loader_dispatch = (XGL_LAYER_DISPATCH_TABLE *) malloc(n *
- sizeof(XGL_LAYER_DISPATCH_TABLE));
+ icd->loader_dispatch = (VK_LAYER_DISPATCH_TABLE *) malloc(n *
+ sizeof(VK_LAYER_DISPATCH_TABLE));
for (unsigned int i = 0; i < n; i++) {
(wrapped_gpus + i)->baseObject = gpus[i];
(wrapped_gpus + i)->pGPA = get_proc_addr;
/* Verify ICD compatibility */
if (!valid_loader_magic_value(gpus[i])) {
- loader_log(XGL_DBG_MSG_WARNING, 0,
+ loader_log(VK_DBG_MSG_WARNING, 0,
"Loader: Incompatible ICD, first dword must be initialized to ICD_LOADER_MAGIC. See loader/README.md for details.\n");
assert(0);
}
- const XGL_LAYER_DISPATCH_TABLE **disp;
- disp = (const XGL_LAYER_DISPATCH_TABLE **) gpus[i];
+ const VK_LAYER_DISPATCH_TABLE **disp;
+ disp = (const VK_LAYER_DISPATCH_TABLE **) gpus[i];
*disp = icd->loader_dispatch + i;
}
*pGpuCount = count;
- return (count > 0) ? XGL_SUCCESS : res;
+ return (count > 0) ? VK_SUCCESS : res;
}
-LOADER_EXPORT void * XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char * pName)
+LOADER_EXPORT void * VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char * pName)
{
if (gpu == NULL) {
return NULL;
}
- XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;
- XGL_LAYER_DISPATCH_TABLE * disp_table = * (XGL_LAYER_DISPATCH_TABLE **) gpuw->baseObject;
+ VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;
+ VK_LAYER_DISPATCH_TABLE * disp_table = * (VK_LAYER_DISPATCH_TABLE **) gpuw->baseObject;
void *addr;
if (disp_table == NULL)
}
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char *pExtName)
+LOADER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char *pExtName)
{
uint32_t gpu_index;
- struct loader_icd *icd = loader_get_icd((const XGL_BASE_LAYER_OBJECT *) gpu, &gpu_index);
+ struct loader_icd *icd = loader_get_icd((const VK_BASE_LAYER_OBJECT *) gpu, &gpu_index);
if (!icd)
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
return find_layer_extension(icd, gpu_index, pExtName, NULL);
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglEnumerateLayers(XGL_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
+LOADER_EXPORT VK_RESULT VKAPI vkEnumerateLayers(VK_PHYSICAL_GPU gpu, size_t maxLayerCount, size_t maxStringSize, size_t* pOutLayerCount, char* const* pOutLayers, void* pReserved)
{
uint32_t gpu_index;
size_t count = 0;
char *lib_name;
- struct loader_icd *icd = loader_get_icd((const XGL_BASE_LAYER_OBJECT *) gpu, &gpu_index);
+ struct loader_icd *icd = loader_get_icd((const VK_BASE_LAYER_OBJECT *) gpu, &gpu_index);
loader_platform_dl_handle handle;
- xglEnumerateLayersType fpEnumerateLayers;
+ vkEnumerateLayersType fpEnumerateLayers;
char layer_buf[16][256];
char * layers[16];
if (pOutLayerCount == NULL || pOutLayers == NULL)
- return XGL_ERROR_INVALID_POINTER;
+ return VK_ERROR_INVALID_POINTER;
if (!icd)
- return XGL_ERROR_UNAVAILABLE;
+ return VK_ERROR_UNAVAILABLE;
for (int i = 0; i < 16; i++)
layers[i] = &layer_buf[i][0];
// Used to call: dlopen(*lib_name, RTLD_LAZY)
if ((handle = loader_platform_open_library(lib_name)) == NULL)
continue;
- if ((fpEnumerateLayers = loader_platform_get_proc_address(handle, "xglEnumerateLayers")) == NULL) {
- //use default layer name based on library name XGL_LAYER_LIBRARY_PREFIX<name>.XGL_LIBRARY_SUFFIX
+ if ((fpEnumerateLayers = loader_platform_get_proc_address(handle, "vkEnumerateLayers")) == NULL) {
+ //use default layer name based on library name VK_LAYER_LIBRARY_PREFIX<name>.VK_LIBRARY_SUFFIX
char *pEnd, *cpyStr;
size_t siz;
loader_platform_close_library(handle);
lib_name = basename(lib_name);
pEnd = strrchr(lib_name, '.');
- siz = (int) (pEnd - lib_name - strlen(XGL_LAYER_LIBRARY_PREFIX) + 1);
+ siz = (int) (pEnd - lib_name - strlen(VK_LAYER_LIBRARY_PREFIX) + 1);
if (pEnd == NULL || siz <= 0)
continue;
cpyStr = malloc(siz);
free(cpyStr);
continue;
}
- strncpy(cpyStr, lib_name + strlen(XGL_LAYER_LIBRARY_PREFIX), siz);
+ strncpy(cpyStr, lib_name + strlen(VK_LAYER_LIBRARY_PREFIX), siz);
cpyStr[siz - 1] = '\0';
if (siz > maxStringSize)
siz = (int) maxStringSize;
} else {
size_t cnt;
uint32_t n;
- XGL_RESULT res;
+ VK_RESULT res;
n = (uint32_t) ((maxStringSize < 256) ? maxStringSize : 256);
res = fpEnumerateLayers(NULL, 16, n, &cnt, layers, (char *) icd->gpus + gpu_index);
loader_platform_close_library(handle);
- if (res != XGL_SUCCESS)
+ if (res != VK_SUCCESS)
continue;
if (cnt + count > maxLayerCount)
cnt = maxLayerCount - count;
*pOutLayerCount = count;
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
+LOADER_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)
{
const struct loader_icd *icd;
struct loader_instance *inst;
- XGL_RESULT res;
+ VK_RESULT res;
uint32_t gpu_idx;
- if (instance == XGL_NULL_HANDLE)
- return XGL_ERROR_INVALID_HANDLE;
+ if (instance == VK_NULL_HANDLE)
+ return VK_ERROR_INVALID_HANDLE;
assert(loader.icds_scanned);
break;
}
- if (inst == XGL_NULL_HANDLE)
- return XGL_ERROR_INVALID_HANDLE;
+ if (inst == VK_NULL_HANDLE)
+ return VK_ERROR_INVALID_HANDLE;
for (icd = inst->icds; icd; icd = icd->next) {
for (uint32_t i = 0; i < icd->gpu_count; i++) {
res = (icd->loader_dispatch + i)->DbgRegisterMsgCallback(icd->scanned_icds->instance,
pfnMsgCallback, pUserData);
- if (res != XGL_SUCCESS) {
+ if (res != VK_SUCCESS) {
gpu_idx = i;
break;
}
}
- if (res != XGL_SUCCESS)
+ if (res != VK_SUCCESS)
break;
}
return res;
}
- return XGL_SUCCESS;
+ return VK_SUCCESS;
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
+LOADER_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)
{
- XGL_RESULT res = XGL_SUCCESS;
+ VK_RESULT res = VK_SUCCESS;
struct loader_instance *inst;
- if (instance == XGL_NULL_HANDLE)
- return XGL_ERROR_INVALID_HANDLE;
+ if (instance == VK_NULL_HANDLE)
+ return VK_ERROR_INVALID_HANDLE;
assert(loader.icds_scanned);
break;
}
- if (inst == XGL_NULL_HANDLE)
- return XGL_ERROR_INVALID_HANDLE;
+ if (inst == VK_NULL_HANDLE)
+ return VK_ERROR_INVALID_HANDLE;
for (const struct loader_icd * icd = inst->icds; icd; icd = icd->next) {
for (uint32_t i = 0; i < icd->gpu_count; i++) {
- XGL_RESULT r;
+ VK_RESULT r;
r = (icd->loader_dispatch + i)->DbgUnregisterMsgCallback(icd->scanned_icds->instance, pfnMsgCallback);
- if (r != XGL_SUCCESS) {
+ if (r != VK_SUCCESS) {
res = r;
}
}
return res;
}
-LOADER_EXPORT XGL_RESULT XGLAPI xglDbgSetGlobalOption(XGL_INSTANCE instance, XGL_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
+LOADER_EXPORT VK_RESULT VKAPI vkDbgSetGlobalOption(VK_INSTANCE instance, VK_DBG_GLOBAL_OPTION dbgOption, size_t dataSize, const void* pData)
{
- XGL_RESULT res = XGL_SUCCESS;
+ VK_RESULT res = VK_SUCCESS;
struct loader_instance *inst;
- if (instance == XGL_NULL_HANDLE)
- return XGL_ERROR_INVALID_HANDLE;
+ if (instance == VK_NULL_HANDLE)
+ return VK_ERROR_INVALID_HANDLE;
assert(loader.icds_scanned);
break;
}
- if (inst == XGL_NULL_HANDLE)
- return XGL_ERROR_INVALID_HANDLE;
+ if (inst == VK_NULL_HANDLE)
+ return VK_ERROR_INVALID_HANDLE;
for (const struct loader_icd * icd = inst->icds; icd; icd = icd->next) {
for (uint32_t i = 0; i < icd->gpu_count; i++) {
- XGL_RESULT r;
+ VK_RESULT r;
r = (icd->loader_dispatch + i)->DbgSetGlobalOption(icd->scanned_icds->instance, dbgOption,
dataSize, pData);
/* unfortunately we cannot roll back */
- if (r != XGL_SUCCESS) {
+ if (r != VK_SUCCESS) {
res = r;
}
}
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
#ifndef LOADER_H
#define LOADER_H
-#include <xgl.h>
-#include <xglDbg.h>
+#include <vulkan.h>
+#include <vkDbg.h>
#if defined(WIN32)
// FIXME: NEED WINDOWS EQUIVALENT
#else // WIN32
-#include <xglWsiX11Ext.h>
+#include <vkWsiX11Ext.h>
#endif // WIN32
-#include <xglLayer.h>
-#include <xglIcd.h>
+#include <vkLayer.h>
+#include <vkIcd.h>
#include <assert.h>
#if defined(__GNUC__) && __GNUC__ >= 4
loader_set_data(obj, data);
}
-static inline void *loader_unwrap_gpu(XGL_PHYSICAL_GPU *gpu)
+static inline void *loader_unwrap_gpu(VK_PHYSICAL_GPU *gpu)
{
- const XGL_BASE_LAYER_OBJECT *wrap = (const XGL_BASE_LAYER_OBJECT *) *gpu;
+ const VK_BASE_LAYER_OBJECT *wrap = (const VK_BASE_LAYER_OBJECT *) *gpu;
- *gpu = (XGL_PHYSICAL_GPU) wrap->nextObject;
+ *gpu = (VK_PHYSICAL_GPU) wrap->nextObject;
return loader_get_data(wrap->baseObject);
}
-extern uint32_t loader_activate_layers(XGL_PHYSICAL_GPU gpu, const XGL_DEVICE_CREATE_INFO* pCreateInfo);
+extern uint32_t loader_activate_layers(VK_PHYSICAL_GPU gpu, const VK_DEVICE_CREATE_INFO* pCreateInfo);
#define MAX_LAYER_LIBRARIES 64
#endif /* LOADER_H */
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2015 LunarG, Inc.
* Copyright 2014 Valve Software
#include <pthread.h>
#include <assert.h>
-// XGL Library Filenames, Paths, etc.:
+// VK Library Filenames, Paths, etc.:
#define PATH_SEPERATOR ':'
#define DIRECTORY_SYMBOL "/"
-#define DRIVER_PATH_ENV "LIBXGL_DRIVERS_PATH"
-#define LAYERS_PATH_ENV "LIBXGL_LAYERS_PATH"
-#define LAYER_NAMES_ENV "LIBXGL_LAYER_NAMES"
-#ifndef DEFAULT_XGL_DRIVERS_PATH
+#define DRIVER_PATH_ENV "LIBVK_DRIVERS_PATH"
+#define LAYERS_PATH_ENV "LIBVK_LAYERS_PATH"
+#define LAYER_NAMES_ENV "LIBVK_LAYER_NAMES"
+#ifndef DEFAULT_VK_DRIVERS_PATH
// TODO: Is this a good default location?
// Need to search for both 32bit and 64bit ICDs
-#define DEFAULT_XGL_DRIVERS_PATH "/usr/lib/i386-linux-gnu/xgl:/usr/lib/x86_64-linux-gnu/xgl"
-#define XGL_DRIVER_LIBRARY_PREFIX "libXGL_"
-#define XGL_DRIVER_LIBRARY_PREFIX_LEN 7
-#define XGL_LAYER_LIBRARY_PREFIX "libXGLLayer"
-#define XGL_LAYER_LIBRARY_PREFIX_LEN 11
-#define XGL_LIBRARY_SUFFIX ".so"
-#define XGL_LIBRARY_SUFFIX_LEN 3
-#endif // DEFAULT_XGL_DRIVERS_PATH
-#ifndef DEFAULT_XGL_LAYERS_PATH
+#define DEFAULT_VK_DRIVERS_PATH "/usr/lib/i386-linux-gnu/vk:/usr/lib/x86_64-linux-gnu/vk"
+#define VK_DRIVER_LIBRARY_PREFIX "libVK_"
+#define VK_DRIVER_LIBRARY_PREFIX_LEN 6
+#define VK_LAYER_LIBRARY_PREFIX "libVKLayer"
+#define VK_LAYER_LIBRARY_PREFIX_LEN 10
+#define VK_LIBRARY_SUFFIX ".so"
+#define VK_LIBRARY_SUFFIX_LEN 3
+#endif // DEFAULT_VK_DRIVERS_PATH
+#ifndef DEFAULT_VK_LAYERS_PATH
// TODO: Are these good default locations?
-#define DEFAULT_XGL_LAYERS_PATH ".:/usr/lib/i386-linux-gnu/xgl:/usr/lib/x86_64-linux-gnu/xgl"
+#define DEFAULT_VK_LAYERS_PATH ".:/usr/lib/i386-linux-gnu/vk:/usr/lib/x86_64-linux-gnu/vk"
#endif
// C99:
using namespace std;
#endif // __cplusplus
-// XGL Library Filenames, Paths, etc.:
+// VK Library Filenames, Paths, etc.:
#define PATH_SEPERATOR ';'
#define DIRECTORY_SYMBOL "\\"
-#define DRIVER_PATH_REGISTRY_VALUE "XGL_DRIVERS_PATH"
-#define LAYERS_PATH_REGISTRY_VALUE "XGL_LAYERS_PATH"
-#define LAYER_NAMES_REGISTRY_VALUE "XGL_LAYER_NAMES"
-#define DRIVER_PATH_ENV "XGL_DRIVERS_PATH"
-#define LAYERS_PATH_ENV "XGL_LAYERS_PATH"
-#define LAYER_NAMES_ENV "XGL_LAYER_NAMES"
-#ifndef DEFAULT_XGL_DRIVERS_PATH
+#define DRIVER_PATH_REGISTRY_VALUE "VK_DRIVERS_PATH"
+#define LAYERS_PATH_REGISTRY_VALUE "VK_LAYERS_PATH"
+#define LAYER_NAMES_REGISTRY_VALUE "VK_LAYER_NAMES"
+#define DRIVER_PATH_ENV "VK_DRIVERS_PATH"
+#define LAYERS_PATH_ENV "VK_LAYERS_PATH"
+#define LAYER_NAMES_ENV "VK_LAYER_NAMES"
+#ifndef DEFAULT_VK_DRIVERS_PATH
// TODO: Is this a good default location?
// Need to search for both 32bit and 64bit ICDs
-#define DEFAULT_XGL_DRIVERS_PATH "C:\\Windows\\System32"
+#define DEFAULT_VK_DRIVERS_PATH "C:\\Windows\\System32"
// TODO/TBD: Is this an appropriate prefix for Windows?
-#define XGL_DRIVER_LIBRARY_PREFIX "XGL_"
-#define XGL_DRIVER_LIBRARY_PREFIX_LEN 4
+#define VK_DRIVER_LIBRARY_PREFIX "VK_"
+#define VK_DRIVER_LIBRARY_PREFIX_LEN 3
// TODO/TBD: Is this an appropriate suffix for Windows?
-#define XGL_LAYER_LIBRARY_PREFIX "XGLLayer"
-#define XGL_LAYER_LIBRARY_PREFIX_LEN 8
-#define XGL_LIBRARY_SUFFIX ".dll"
-#define XGL_LIBRARY_SUFFIX_LEN 4
-#endif // DEFAULT_XGL_DRIVERS_PATH
-#ifndef DEFAULT_XGL_LAYERS_PATH
+#define VK_LAYER_LIBRARY_PREFIX "VKLayer"
+#define VK_LAYER_LIBRARY_PREFIX_LEN 7
+#define VK_LIBRARY_SUFFIX ".dll"
+#define VK_LIBRARY_SUFFIX_LEN 4
+#endif // DEFAULT_VK_DRIVERS_PATH
+#ifndef DEFAULT_VK_LAYERS_PATH
// TODO: Is this a good default location?
-#define DEFAULT_XGL_LAYERS_PATH "C:\\Windows\\System32"
-#endif // DEFAULT_XGL_LAYERS_PATH
+#define DEFAULT_VK_LAYERS_PATH "C:\\Windows\\System32"
+#endif // DEFAULT_VK_LAYERS_PATH
// C99:
// Microsoft didn't implement C99 in Visual Studio, but started adding it with
#!/usr/bin/env python3
#
-# XGL
+# VK
#
# Copyright (C) 2014 LunarG, Inc.
#
# code_gen.py overview
# This script generates code based on input headers
-# Initially it's intended to support Mantle and XGL headers and
+# Initially it's intended to support Mantle and VK headers and
# generate wrapper functions that can be used to display
# structs in a human-readable txt format, as well as utility functions
# to print enum values as strings
self.typedef_fwd_dict[base_type] = targ_type.strip(';')
self.typedef_rev_dict[targ_type.strip(';')] = base_type
elif parse_enum:
- #if 'XGL_MAX_ENUM' not in line and '{' not in line:
- if True not in [ens in line for ens in ['{', 'XGL_MAX_ENUM', '_RANGE']]:
+ #if 'VK_MAX_ENUM' not in line and '{' not in line:
+ if True not in [ens in line for ens in ['{', 'VK_MAX_ENUM', '_RANGE']]:
self._add_enum(line, base_type, default_enum_val)
default_enum_val += 1
elif parse_struct:
self.struct_dict = in_struct_dict
self.include_headers = []
self.api = prefix
- self.header_filename = os.path.join(out_dir, self.api+"_struct_wrappers.h")
- self.class_filename = os.path.join(out_dir, self.api+"_struct_wrappers.cpp")
- self.string_helper_filename = os.path.join(out_dir, self.api+"_struct_string_helper.h")
- self.string_helper_no_addr_filename = os.path.join(out_dir, self.api+"_struct_string_helper_no_addr.h")
- self.string_helper_cpp_filename = os.path.join(out_dir, self.api+"_struct_string_helper_cpp.h")
- self.string_helper_no_addr_cpp_filename = os.path.join(out_dir, self.api+"_struct_string_helper_no_addr_cpp.h")
- self.validate_helper_filename = os.path.join(out_dir, self.api+"_struct_validate_helper.h")
+ if prefix == "vulkan":
+ self.api_prefix = "vk"
+ else:
+ self.api_prefix = prefix
+ self.header_filename = os.path.join(out_dir, self.api_prefix+"_struct_wrappers.h")
+ self.class_filename = os.path.join(out_dir, self.api_prefix+"_struct_wrappers.cpp")
+ self.string_helper_filename = os.path.join(out_dir, self.api_prefix+"_struct_string_helper.h")
+ self.string_helper_no_addr_filename = os.path.join(out_dir, self.api_prefix+"_struct_string_helper_no_addr.h")
+ self.string_helper_cpp_filename = os.path.join(out_dir, self.api_prefix+"_struct_string_helper_cpp.h")
+ self.string_helper_no_addr_cpp_filename = os.path.join(out_dir, self.api_prefix+"_struct_string_helper_no_addr_cpp.h")
+ self.validate_helper_filename = os.path.join(out_dir, self.api_prefix+"_struct_validate_helper.h")
self.no_addr = False
self.hfg = CommonFileGen(self.header_filename)
self.cfg = CommonFileGen(self.class_filename)
self.shg = CommonFileGen(self.string_helper_filename)
self.shcppg = CommonFileGen(self.string_helper_cpp_filename)
self.vhg = CommonFileGen(self.validate_helper_filename)
- self.size_helper_filename = os.path.join(out_dir, self.api+"_struct_size_helper.h")
- self.size_helper_c_filename = os.path.join(out_dir, self.api+"_struct_size_helper.c")
+ self.size_helper_filename = os.path.join(out_dir, self.api_prefix+"_struct_size_helper.h")
+ self.size_helper_c_filename = os.path.join(out_dir, self.api_prefix+"_struct_size_helper.c")
self.size_helper_gen = CommonFileGen(self.size_helper_filename)
self.size_helper_c_gen = CommonFileGen(self.size_helper_c_filename)
#print(self.header_filename)
def _generateCppHeader(self):
header = []
header.append("//#includes, #defines, globals and such...\n")
- header.append("#include <stdio.h>\n#include <%s>\n#include <%s_enum_string_helper.h>\n" % (os.path.basename(self.header_filename), self.api))
+ header.append("#include <stdio.h>\n#include <%s>\n#include <%s_enum_string_helper.h>\n" % (os.path.basename(self.header_filename), self.api_prefix))
return "".join(header)
def _generateClassDefinition(self):
class_def = []
- if 'xgl' == self.api: # Mantle doesn't have pNext to worry about
+ if 'vk' == self.api: # Mantle doesn't have pNext to worry about
class_def.append(self._generateDynamicPrintFunctions())
for s in sorted(self.struct_dict):
class_def.append("\n// %s class definition" % self.get_class_name(s))
def _generateDynamicPrintFunctions(self):
dp_funcs = []
dp_funcs.append("\nvoid dynamic_display_full_txt(const void* pStruct, uint32_t indent)\n{\n // Cast to APP_INFO ptr initially just to pull sType off struct")
- dp_funcs.append(" XGL_STRUCTURE_TYPE sType = ((XGL_APPLICATION_INFO*)pStruct)->sType;\n")
+ dp_funcs.append(" VK_STRUCTURE_TYPE sType = ((VK_APPLICATION_INFO*)pStruct)->sType;\n")
dp_funcs.append(" switch (sType)\n {")
for e in enum_type_dict:
class_num = 0
return "\n".join(dp_funcs)
def _get_func_name(self, struct, mid_str):
- return "%s_%s_%s" % (self.api, mid_str, struct.lower().strip("_"))
+ return "%s_%s_%s" % (self.api_prefix, mid_str, struct.lower().strip("_"))
def _get_sh_func_name(self, struct):
return self._get_func_name(struct, 'print')
sh_funcs.append(" if (pStruct == NULL) {")
sh_funcs.append(" return NULL;")
sh_funcs.append(" }")
- sh_funcs.append(" XGL_STRUCTURE_TYPE sType = ((XGL_APPLICATION_INFO*)pStruct)->sType;")
+ sh_funcs.append(" VK_STRUCTURE_TYPE sType = ((VK_APPLICATION_INFO*)pStruct)->sType;")
sh_funcs.append(' char indent[100];\n strcpy(indent, " ");\n strcat(indent, prefix);')
sh_funcs.append(" switch (sType)\n {")
for e in enum_type_dict:
sh_funcs.append(" if (pStruct == NULL) {\n")
sh_funcs.append(" return NULL;")
sh_funcs.append(" }\n")
- sh_funcs.append(" XGL_STRUCTURE_TYPE sType = ((XGL_APPLICATION_INFO*)pStruct)->sType;")
+ sh_funcs.append(" VK_STRUCTURE_TYPE sType = ((VK_APPLICATION_INFO*)pStruct)->sType;")
sh_funcs.append(' string indent = " ";')
sh_funcs.append(' indent += prefix;')
sh_funcs.append(" switch (sType)\n {")
header = []
header.append("//#includes, #defines, globals and such...\n")
for f in self.include_headers:
- if 'xgl_enum_string_helper' not in f:
+ if 'vk_enum_string_helper' not in f:
header.append("#include <%s>\n" % f)
- header.append('#include "xgl_enum_string_helper.h"\n\n// Function Prototypes\n')
+ header.append('#include "vk_enum_string_helper.h"\n\n// Function Prototypes\n')
header.append("char* dynamic_display(const void* pStruct, const char* prefix);\n")
return "".join(header)
header = []
header.append("//#includes, #defines, globals and such...\n")
for f in self.include_headers:
- if 'xgl_enum_string_helper' not in f:
+ if 'vk_enum_string_helper' not in f:
header.append("#include <%s>\n" % f)
- header.append('#include "xgl_enum_string_helper.h"\n')
+ header.append('#include "vk_enum_string_helper.h"\n')
header.append('using namespace std;\n\n// Function Prototypes\n')
header.append("string dynamic_display(const void* pStruct, const string prefix);\n")
return "".join(header)
for s in sorted(self.struct_dict):
sh_funcs.append('uint32_t %s(const %s* pStruct)\n{' % (self._get_vh_func_name(s), typedef_fwd_dict[s]))
for m in sorted(self.struct_dict[s]):
- # TODO : Need to handle arrays of enums like in XGL_RENDER_PASS_CREATE_INFO struct
+ # TODO : Need to handle arrays of enums like in VK_RENDER_PASS_CREATE_INFO struct
if is_type(self.struct_dict[s][m]['type'], 'enum') and not self.struct_dict[s][m]['ptr']:
sh_funcs.append(' if (!validate_%s(pStruct->%s))\n return 0;' % (self.struct_dict[s][m]['type'], self.struct_dict[s][m]['name']))
# TODO : Need a little refinement to this code to make sure type of struct matches expected input (ptr, const...)
header = []
header.append("//#includes, #defines, globals and such...\n")
for f in self.include_headers:
- if 'xgl_enum_validate_helper' not in f:
+ if 'vk_enum_validate_helper' not in f:
header.append("#include <%s>\n" % f)
- header.append('#include "xgl_enum_validate_helper.h"\n\n// Function Prototypes\n')
+ header.append('#include "vk_enum_validate_helper.h"\n\n// Function Prototypes\n')
#header.append("char* dynamic_display(const void* pStruct, const char* prefix);\n")
return "".join(header)
if not is_type(self.struct_dict[s][m]['type'], 'struct') and not 'char' in self.struct_dict[s][m]['type'].lower():
if 'ppMemBarriers' == self.struct_dict[s][m]['name']:
# TODO : For now be conservative and consider all memBarrier ptrs as largest possible struct
- sh_funcs.append('%sstructSize += pStruct->%s*(sizeof(%s*) + sizeof(XGL_IMAGE_MEMORY_BARRIER));' % (indent, self.struct_dict[s][m]['array_size'], self.struct_dict[s][m]['type']))
+ sh_funcs.append('%sstructSize += pStruct->%s*(sizeof(%s*) + sizeof(VK_IMAGE_MEMORY_BARRIER));' % (indent, self.struct_dict[s][m]['array_size'], self.struct_dict[s][m]['type']))
else:
sh_funcs.append('%sstructSize += pStruct->%s*(sizeof(%s*) + sizeof(%s));' % (indent, self.struct_dict[s][m]['array_size'], self.struct_dict[s][m]['type'], self.struct_dict[s][m]['type']))
else: # This is an array of char* or array of struct ptrs
else:
sh_funcs.append('size_t get_dynamic_struct_size(const void* pStruct)\n{')
indent = ' '
- sh_funcs.append('%s// Just use XGL_APPLICATION_INFO as struct until actual type is resolved' % (indent))
- sh_funcs.append('%sXGL_APPLICATION_INFO* pNext = (XGL_APPLICATION_INFO*)pStruct;' % (indent))
+ sh_funcs.append('%s// Just use VK_APPLICATION_INFO as struct until actual type is resolved' % (indent))
+ sh_funcs.append('%sVK_APPLICATION_INFO* pNext = (VK_APPLICATION_INFO*)pStruct;' % (indent))
sh_funcs.append('%ssize_t structSize = 0;' % (indent))
if follow_chain:
sh_funcs.append('%swhile (pNext) {' % (indent))
indent = indent[:-4]
sh_funcs.append('%s}' % (indent))
if follow_chain:
- sh_funcs.append('%spNext = (XGL_APPLICATION_INFO*)pNext->pNext;' % (indent))
+ sh_funcs.append('%spNext = (VK_APPLICATION_INFO*)pNext->pNext;' % (indent))
indent = indent[:-4]
sh_funcs.append('%s}' % (indent))
sh_funcs.append('%sreturn structSize;\n}' % indent)
def __init__(self, struct_dict, prefix, out_dir):
self.struct_dict = struct_dict
self.api = prefix
- self.out_file = os.path.join(out_dir, self.api+"_struct_graphviz_helper.h")
+ if prefix == "vulkan":
+ self.api_prefix = "vk"
+ else:
+ self.api_prefix = prefix
+ self.out_file = os.path.join(out_dir, self.api_prefix+"_struct_graphviz_helper.h")
self.gvg = CommonFileGen(self.out_file)
def generate(self):
header = []
header.append("//#includes, #defines, globals and such...\n")
for f in self.include_headers:
- if 'xgl_enum_string_helper' not in f:
+ if 'vk_enum_string_helper' not in f:
header.append("#include <%s>\n" % f)
- #header.append('#include "xgl_enum_string_helper.h"\n\n// Function Prototypes\n')
+ #header.append('#include "vk_enum_string_helper.h"\n\n// Function Prototypes\n')
header.append("\nchar* dynamic_gv_display(const void* pStruct, const char* prefix);\n")
return "".join(header)
def _get_gv_func_name(self, struct):
- return "%s_gv_print_%s" % (self.api, struct.lower().strip("_"))
+ return "%s_gv_print_%s" % (self.api_prefix, struct.lower().strip("_"))
# Return elements to create formatted string for given struct member
def _get_struct_gv_print_formatted(self, struct_member, pre_var_name="", postfix = "\\n", struct_var_name="pStruct", struct_ptr=True, print_array=False, port_label=""):
def _generateBody(self):
gv_funcs = []
array_func_list = [] # structs for which we'll generate an array version of their print function
- array_func_list.append('xgl_buffer_view_attach_info')
- array_func_list.append('xgl_image_view_attach_info')
- array_func_list.append('xgl_sampler_image_view_info')
- array_func_list.append('xgl_descriptor_type_count')
+ array_func_list.append('vk_buffer_view_attach_info')
+ array_func_list.append('vk_image_view_attach_info')
+ array_func_list.append('vk_sampler_image_view_info')
+ array_func_list.append('vk_descriptor_type_count')
# For first pass, generate prototype
for s in sorted(self.struct_dict):
gv_funcs.append('char* %s(const %s* pStruct, const char* myNodeName);\n' % (self._get_gv_func_name(s), typedef_fwd_dict[s]))
if s.lower().strip("_") in array_func_list:
- if s.lower().strip("_") in ['xgl_buffer_view_attach_info', 'xgl_image_view_attach_info']:
+ if s.lower().strip("_") in ['vk_buffer_view_attach_info', 'vk_image_view_attach_info']:
gv_funcs.append('char* %s_array(uint32_t count, const %s* const* pStruct, const char* myNodeName);\n' % (self._get_gv_func_name(s), typedef_fwd_dict[s]))
else:
gv_funcs.append('char* %s_array(uint32_t count, const %s* pStruct, const char* myNodeName);\n' % (self._get_gv_func_name(s), typedef_fwd_dict[s]))
gv_funcs.append(" return str;\n}\n")
if s.lower().strip("_") in array_func_list:
ptr_array = False
- if s.lower().strip("_") in ['xgl_buffer_view_attach_info', 'xgl_image_view_attach_info']:
+ if s.lower().strip("_") in ['vk_buffer_view_attach_info', 'vk_image_view_attach_info']:
ptr_array = True
gv_funcs.append('char* %s_array(uint32_t count, const %s* const* pStruct, const char* myNodeName)\n{\n char* str;\n char tmpStr[1024];\n' % (self._get_gv_func_name(s), typedef_fwd_dict[s]))
else:
# Add function to dynamically print out unknown struct
gv_funcs.append("char* dynamic_gv_display(const void* pStruct, const char* nodeName)\n{\n")
gv_funcs.append(" // Cast to APP_INFO ptr initially just to pull sType off struct\n")
- gv_funcs.append(" XGL_STRUCTURE_TYPE sType = ((XGL_APPLICATION_INFO*)pStruct)->sType;\n")
+ gv_funcs.append(" VK_STRUCTURE_TYPE sType = ((VK_APPLICATION_INFO*)pStruct)->sType;\n")
gv_funcs.append(" switch (sType)\n {\n")
for e in enum_type_dict:
if "_STRUCTURE_TYPE" in e:
struct_name = v.replace("_STRUCTURE_TYPE", "")
print_func_name = self._get_gv_func_name(struct_name)
# TODO : Hand-coded fixes for some exceptions
- #if 'XGL_PIPELINE_CB_STATE_CREATE_INFO' in struct_name:
- # struct_name = 'XGL_PIPELINE_CB_STATE'
- if 'XGL_SEMAPHORE_CREATE_INFO' in struct_name:
- struct_name = 'XGL_SEMAPHORE_CREATE_INFO'
+ #if 'VK_PIPELINE_CB_STATE_CREATE_INFO' in struct_name:
+ # struct_name = 'VK_PIPELINE_CB_STATE'
+ if 'VK_SEMAPHORE_CREATE_INFO' in struct_name:
+ struct_name = 'VK_SEMAPHORE_CREATE_INFO'
print_func_name = self._get_gv_func_name(struct_name)
- elif 'XGL_SEMAPHORE_OPEN_INFO' in struct_name:
- struct_name = 'XGL_SEMAPHORE_OPEN_INFO'
+ elif 'VK_SEMAPHORE_OPEN_INFO' in struct_name:
+ struct_name = 'VK_SEMAPHORE_OPEN_INFO'
print_func_name = self._get_gv_func_name(struct_name)
gv_funcs.append(' case %s:\n' % (v))
gv_funcs.append(' return %s((%s*)pStruct, nodeName);\n' % (print_func_name, struct_name))
#print(enum_val_dict)
#print(typedef_dict)
#print(struct_dict)
+ prefix = os.path.basename(opts.input_file).strip(".h")
+ if prefix == "vulkan":
+ prefix = "vk"
if (opts.abs_out_dir is not None):
- enum_sh_filename = os.path.join(opts.abs_out_dir, os.path.basename(opts.input_file).strip(".h")+"_enum_string_helper.h")
+ enum_sh_filename = os.path.join(opts.abs_out_dir, prefix+"_enum_string_helper.h")
else:
- enum_sh_filename = os.path.join(os.getcwd(), opts.rel_out_dir, os.path.basename(opts.input_file).strip(".h")+"_enum_string_helper.h")
+ enum_sh_filename = os.path.join(os.getcwd(), opts.rel_out_dir, prefix+"_enum_string_helper.h")
enum_sh_filename = os.path.abspath(enum_sh_filename)
if not os.path.exists(os.path.dirname(enum_sh_filename)):
print("Creating output dir %s" % os.path.dirname(enum_sh_filename))
os.mkdir(os.path.dirname(enum_sh_filename))
if opts.gen_enum_string_helper:
print("Generating enum string helper to %s" % enum_sh_filename)
- enum_vh_filename = os.path.join(os.path.dirname(enum_sh_filename), os.path.basename(opts.input_file).strip(".h")+"_enum_validate_helper.h")
+ enum_vh_filename = os.path.join(os.path.dirname(enum_sh_filename), prefix+"_enum_validate_helper.h")
print("Generating enum validate helper to %s" % enum_vh_filename)
eg = EnumCodeGen(enum_type_dict, enum_val_dict, typedef_fwd_dict, os.path.basename(opts.input_file), enum_sh_filename, enum_vh_filename)
eg.generateStringHelper()
#!/usr/bin/env python3
#
-# XGL
+# VK
#
# Copyright (C) 2014 LunarG, Inc.
#
import xgl
def generate_get_proc_addr_check(name):
- return " if (!%s || %s[0] != 'x' || %s[1] != 'g' || %s[2] != 'l')\n" \
- " return NULL;" % ((name,) * 4)
+ return " if (!%s || %s[0] != 'v' || %s[1] != 'k')\n" \
+ " return NULL;" % ((name,) * 3)
class Subcommand(object):
def __init__(self, argv):
return """/* THIS FILE IS GENERATED. DO NOT EDIT. */
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
def _generate_object_setup(self, proto):
method = "loader_init_data"
- cond = "res == XGL_SUCCESS"
+ cond = "res == VK_SUCCESS"
if "Get" in proto.name:
method = "loader_set_data"
for proto in self.protos:
if not self._is_dispatchable(proto):
continue
-
func = []
obj_setup = self._generate_object_setup(proto)
- func.append(qual + proto.c_func(prefix="xgl", attr="XGLAPI"))
+ func.append(qual + proto.c_func(prefix="vk", attr="VKAPI"))
func.append("{")
# declare local variables
- func.append(" const XGL_LAYER_DISPATCH_TABLE *disp;")
+ func.append(" const VK_LAYER_DISPATCH_TABLE *disp;")
if proto.ret != 'void' and obj_setup:
- func.append(" XGL_RESULT res;")
+ func.append(" VK_RESULT res;")
func.append("")
# active layers before dispatching CreateDevice
# get dispatch table and unwrap GPUs
for param in proto.params:
stmt = ""
- if param.ty == "XGL_PHYSICAL_GPU":
+ if param.ty == "VK_PHYSICAL_GPU":
stmt = "loader_unwrap_gpu(&%s);" % param.name
if param == proto.params[0]:
stmt = "disp = " + stmt
super().run()
def generate_header(self):
- return "\n".join(["#include <xgl.h>",
- "#include <xglLayer.h>",
+ return "\n".join(["#include <vulkan.h>",
+ "#include <vkLayer.h>",
"#include <string.h>",
"#include \"loader_platform.h\""])
stmts.append("table->%s = gpa; /* direct assignment */" %
proto.name)
else:
- stmts.append("table->%s = (xgl%sType) gpa(gpu, \"xgl%s\");" %
+ stmts.append("table->%s = (vk%sType) gpa(gpu, \"vk%s\");" %
(proto.name, proto.name, proto.name))
stmts.append("#endif")
func = []
- func.append("static inline void %s_initialize_dispatch_table(XGL_LAYER_DISPATCH_TABLE *table,"
+ func.append("static inline void %s_initialize_dispatch_table(VK_LAYER_DISPATCH_TABLE *table,"
% self.prefix)
- func.append("%s xglGetProcAddrType gpa,"
+ func.append("%s vkGetProcAddrType gpa,"
% (" " * len(self.prefix)))
- func.append("%s XGL_PHYSICAL_GPU gpu)"
+ func.append("%s VK_PHYSICAL_GPU gpu)"
% (" " * len(self.prefix)))
func.append("{")
func.append(" %s" % "\n ".join(stmts))
lookups.append("#endif")
func = []
- func.append("static inline void *%s_lookup_dispatch_table(const XGL_LAYER_DISPATCH_TABLE *table,"
+ func.append("static inline void *%s_lookup_dispatch_table(const VK_LAYER_DISPATCH_TABLE *table,"
% self.prefix)
func.append("%s const char *name)"
% (" " * len(self.prefix)))
func.append("{")
func.append(generate_get_proc_addr_check("name"))
func.append("")
- func.append(" name += 3;")
+ func.append(" name += 2;")
func.append(" %s" % "\n ".join(lookups))
func.append("")
func.append(" return NULL;")
self.prefix = self.argv[0]
self.qual = "static"
else:
- self.prefix = "xgl"
+ self.prefix = "vk"
self.qual = "ICD_EXPORT"
super().run()
return "#include \"icd.h\""
def _generate_stub_decl(self, proto):
- return proto.c_pretty_decl(self.prefix + proto.name, attr="XGLAPI")
+ return proto.c_pretty_decl(self.prefix + proto.name, attr="VKAPI")
def _generate_stubs(self):
stubs = []
for proto in self.protos:
decl = self._generate_stub_decl(proto)
if proto.ret != "void":
- stmt = " return XGL_ERROR_UNKNOWN;\n"
+ stmt = " return VK_ERROR_UNKNOWN;\n"
else:
stmt = ""
body.append("{")
body.append(generate_get_proc_addr_check(gpa_pname))
body.append("")
- body.append(" %s += 3;" % gpa_pname)
+ body.append(" %s += 2;" % gpa_pname)
body.append(" %s" % "\n ".join(lookups))
body.append("")
body.append(" return NULL;")
class LayerInterceptProcSubcommand(Subcommand):
def run(self):
- self.prefix = "xgl"
+ self.prefix = "vk"
# we could get the list from argv if wanted
self.intercepted = [proto.name for proto in self.protos
super().run()
def generate_header(self):
- return "\n".join(["#include <string.h>", "#include \"xglLayer.h\""])
+ return "\n".join(["#include <string.h>", "#include \"vkLayer.h\""])
def generate_body(self):
lookups = []
body.append("{")
body.append(generate_get_proc_addr_check("name"))
body.append("")
- body.append(" name += 3;")
+ body.append(" name += 2;")
body.append(" %s" % "\n ".join(lookups))
body.append("")
body.append(" return NULL;")
return """; THIS FILE IS GENERATED. DO NOT EDIT.
;;;; Begin Copyright Notice ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
-; XGL
+; Vulkan
;
; Copyright (C) 2015 LunarG, Inc.
;
for proto in self.protos:
if self.exports and proto.name not in self.exports:
continue
- body.append(" xgl" + proto.name)
+ body.append(" vk" + proto.name)
return "\n".join(body)
#!/usr/bin/env python3
#
-# XGL
+# VK
#
# Copyright (C) 2014 LunarG, Inc.
#
import os
import xgl
-import xgl_helper
+import vk_helper
def generate_get_proc_addr_check(name):
- return " if (!%s || %s[0] != 'x' || %s[1] != 'g' || %s[2] != 'l')\n" \
- " return NULL;" % ((name,) * 4)
+ return " if (!%s || %s[0] != 'v' || %s[1] != 'k')\n" \
+ " return NULL;" % ((name,) * 3)
class Subcommand(object):
def __init__(self, argv):
return """/* THIS FILE IS GENERATED. DO NOT EDIT. */
/*
- * XGL
+ * Vulkan
*
* Copyright (C) 2014 LunarG, Inc.
*
pass
# Return set of printf '%' qualifier and input to that qualifier
- def _get_printf_params(self, xgl_type, name, output_param, cpp=False):
+ def _get_printf_params(self, vk_type, name, output_param, cpp=False):
# TODO : Need ENUM and STRUCT checks here
- if xgl_helper.is_type(xgl_type, 'enum'):#"_TYPE" in xgl_type: # TODO : This should be generic ENUM check
- return ("%s", "string_%s(%s)" % (xgl_type.strip('const ').strip('*'), name))
- if "char*" == xgl_type:
+ if vk_helper.is_type(vk_type, 'enum'):#"_TYPE" in vk_type: # TODO : This should be generic ENUM check
+ return ("%s", "string_%s(%s)" % (vk_type.strip('const ').strip('*'), name))
+ if "char*" == vk_type:
return ("%s", name)
- if "uint64" in xgl_type:
- if '*' in xgl_type:
+ if "uint64" in vk_type:
+ if '*' in vk_type:
return ("%lu", "*%s" % name)
return ("%lu", name)
- if "size" in xgl_type:
- if '*' in xgl_type:
+ if "size" in vk_type:
+ if '*' in vk_type:
return ("%zu", "*%s" % name)
return ("%zu", name)
- if "float" in xgl_type:
- if '[' in xgl_type: # handle array, current hard-coded to 4 (TODO: Make this dynamic)
+ if "float" in vk_type:
+ if '[' in vk_type: # handle array, currently hard-coded to 4 (TODO: Make this dynamic)
if cpp:
return ("[%i, %i, %i, %i]", '"[" << %s[0] << "," << %s[1] << "," << %s[2] << "," << %s[3] << "]"' % (name, name, name, name))
return ("[%f, %f, %f, %f]", "%s[0], %s[1], %s[2], %s[3]" % (name, name, name, name))
return ("%f", name)
- if "bool" in xgl_type or 'xcb_randr_crtc_t' in xgl_type:
+ if "bool" in vk_type or 'xcb_randr_crtc_t' in vk_type:
return ("%u", name)
- if True in [t in xgl_type for t in ["int", "FLAGS", "MASK", "xcb_window_t"]]:
- if '[' in xgl_type: # handle array, current hard-coded to 4 (TODO: Make this dynamic)
+ if True in [t in vk_type for t in ["int", "FLAGS", "MASK", "xcb_window_t"]]:
+ if '[' in vk_type: # handle array, currently hard-coded to 4 (TODO: Make this dynamic)
if cpp:
return ("[%i, %i, %i, %i]", "%s[0] << %s[1] << %s[2] << %s[3]" % (name, name, name, name))
return ("[%i, %i, %i, %i]", "%s[0], %s[1], %s[2], %s[3]" % (name, name, name, name))
- if '*' in xgl_type:
+ if '*' in vk_type:
if 'pUserData' == name:
return ("%i", "((pUserData == 0) ? 0 : *(pUserData))")
return ("%i", "*(%s)" % name)
return ("%i", name)
# TODO : This is special-cased as there's only one "format" param currently and it's nice to expand it
- if "XGL_FORMAT" == xgl_type:
+ if "VK_FORMAT" == vk_type:
if cpp:
return ("%p", "&%s" % name)
- return ("{%s.channelFormat = %%s, %s.numericFormat = %%s}" % (name, name), "string_XGL_CHANNEL_FORMAT(%s.channelFormat), string_XGL_NUM_FORMAT(%s.numericFormat)" % (name, name))
+ return ("{%s.channelFormat = %%s, %s.numericFormat = %%s}" % (name, name), "string_VK_CHANNEL_FORMAT(%s.channelFormat), string_VK_NUM_FORMAT(%s.numericFormat)" % (name, name))
if output_param:
return ("%p", "(void*)*%s" % name)
- if xgl_helper.is_type(xgl_type, 'struct') and '*' not in xgl_type:
+ if vk_helper.is_type(vk_type, 'struct') and '*' not in vk_type:
return ("%p", "(void*)(&%s)" % name)
return ("%p", "(void*)(%s)" % name)
def _gen_layer_dbg_callback_register(self):
r_body = []
- r_body.append('XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgRegisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)')
+ r_body.append('VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgRegisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback, void* pUserData)')
r_body.append('{')
r_body.append(' // This layer intercepts callbacks')
- r_body.append(' XGL_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (XGL_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(XGL_LAYER_DBG_FUNCTION_NODE));')
+ r_body.append(' VK_LAYER_DBG_FUNCTION_NODE *pNewDbgFuncNode = (VK_LAYER_DBG_FUNCTION_NODE*)malloc(sizeof(VK_LAYER_DBG_FUNCTION_NODE));')
r_body.append(' if (!pNewDbgFuncNode)')
- r_body.append(' return XGL_ERROR_OUT_OF_MEMORY;')
+ r_body.append(' return VK_ERROR_OUT_OF_MEMORY;')
r_body.append(' pNewDbgFuncNode->pfnMsgCallback = pfnMsgCallback;')
r_body.append(' pNewDbgFuncNode->pUserData = pUserData;')
r_body.append(' pNewDbgFuncNode->pNext = g_pDbgFunctionHead;')
r_body.append(' g_pDbgFunctionHead = pNewDbgFuncNode;')
r_body.append(' // force callbacks if DebugAction hasn\'t been set already other than initial value')
r_body.append(' if (g_actionIsDefault) {')
- r_body.append(' g_debugAction = XGL_DBG_LAYER_ACTION_CALLBACK;')
+ r_body.append(' g_debugAction = VK_DBG_LAYER_ACTION_CALLBACK;')
r_body.append(' }')
- r_body.append(' XGL_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);')
+ r_body.append(' VK_RESULT result = nextTable.DbgRegisterMsgCallback(instance, pfnMsgCallback, pUserData);')
r_body.append(' return result;')
r_body.append('}')
return "\n".join(r_body)
def _gen_layer_dbg_callback_unregister(self):
ur_body = []
- ur_body.append('XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglDbgUnregisterMsgCallback(XGL_INSTANCE instance, XGL_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)')
+ ur_body.append('VK_LAYER_EXPORT VK_RESULT VKAPI vkDbgUnregisterMsgCallback(VK_INSTANCE instance, VK_DBG_MSG_CALLBACK_FUNCTION pfnMsgCallback)')
ur_body.append('{')
- ur_body.append(' XGL_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;')
- ur_body.append(' XGL_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;')
+ ur_body.append(' VK_LAYER_DBG_FUNCTION_NODE *pTrav = g_pDbgFunctionHead;')
+ ur_body.append(' VK_LAYER_DBG_FUNCTION_NODE *pPrev = pTrav;')
ur_body.append(' while (pTrav) {')
ur_body.append(' if (pTrav->pfnMsgCallback == pfnMsgCallback) {')
ur_body.append(' pPrev->pNext = pTrav->pNext;')
ur_body.append(' if (g_pDbgFunctionHead == NULL)')
ur_body.append(' {')
ur_body.append(' if (g_actionIsDefault)')
- ur_body.append(' g_debugAction = XGL_DBG_LAYER_ACTION_LOG_MSG;')
+ ur_body.append(' g_debugAction = VK_DBG_LAYER_ACTION_LOG_MSG;')
ur_body.append(' else')
- ur_body.append(' g_debugAction &= ~XGL_DBG_LAYER_ACTION_CALLBACK;')
+ ur_body.append(' g_debugAction &= ~VK_DBG_LAYER_ACTION_CALLBACK;')
ur_body.append(' }')
- ur_body.append(' XGL_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);')
+ ur_body.append(' VK_RESULT result = nextTable.DbgUnregisterMsgCallback(instance, pfnMsgCallback);')
ur_body.append(' return result;')
ur_body.append('}')
return "\n".join(ur_body)
def _gen_layer_get_extension_support(self, layer="Generic"):
ges_body = []
- ges_body.append('XGL_LAYER_EXPORT XGL_RESULT XGLAPI xglGetExtensionSupport(XGL_PHYSICAL_GPU gpu, const char* pExtName)')
+ ges_body.append('VK_LAYER_EXPORT VK_RESULT VKAPI vkGetExtensionSupport(VK_PHYSICAL_GPU gpu, const char* pExtName)')
ges_body.append('{')
- ges_body.append(' XGL_RESULT result;')
- ges_body.append(' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;')
+ ges_body.append(' VK_RESULT result;')
+ ges_body.append(' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;')
ges_body.append('')
ges_body.append(' /* This entrypoint is NOT going to init its own dispatch table since loader calls here early */')
ges_body.append(' if (!strncmp(pExtName, "%s", strlen("%s")))' % (layer, layer))
ges_body.append(' {')
- ges_body.append(' result = XGL_SUCCESS;')
+ ges_body.append(' result = VK_SUCCESS;')
ges_body.append(' } else if (nextTable.GetExtensionSupport != NULL)')
ges_body.append(' {')
- ges_body.append(' result = nextTable.GetExtensionSupport((XGL_PHYSICAL_GPU)gpuw->nextObject, pExtName);')
+ ges_body.append(' result = nextTable.GetExtensionSupport((VK_PHYSICAL_GPU)gpuw->nextObject, pExtName);')
ges_body.append(' } else')
ges_body.append(' {')
- ges_body.append(' result = XGL_ERROR_INVALID_EXTENSION;')
+ ges_body.append(' result = VK_ERROR_INVALID_EXTENSION;')
ges_body.append(' }')
ges_body.append(' return result;')
ges_body.append('}')
funcs.append(intercept)
intercepted.append(proto)
- prefix="xgl"
+ prefix="vk"
lookups = []
for proto in intercepted:
if 'WsiX11' in proto.name:
body.append("{")
body.append(generate_get_proc_addr_check("name"))
body.append("")
- body.append(" name += 3;")
+ body.append(" name += 2;")
body.append(" %s" % "\n ".join(lookups))
body.append("")
body.append(" return NULL;")
def _generate_extensions(self):
exts = []
- exts.append('uint64_t objTrackGetObjectCount(XGL_OBJECT_TYPE type)')
+ exts.append('uint64_t objTrackGetObjectCount(VK_OBJECT_TYPE type)')
exts.append('{')
- exts.append(' return (type == XGL_OBJECT_TYPE_ANY) ? numTotalObjs : numObjs[type];')
+ exts.append(' return (type == VK_OBJECT_TYPE_ANY) ? numTotalObjs : numObjs[type];')
exts.append('}')
exts.append('')
- exts.append('XGL_RESULT objTrackGetObjects(XGL_OBJECT_TYPE type, uint64_t objCount, OBJTRACK_NODE* pObjNodeArray)')
+ exts.append('VK_RESULT objTrackGetObjects(VK_OBJECT_TYPE type, uint64_t objCount, OBJTRACK_NODE* pObjNodeArray)')
exts.append('{')
exts.append(" // This bool flags if we're pulling all objs or just a single class of objs")
- exts.append(' bool32_t bAllObjs = (type == XGL_OBJECT_TYPE_ANY);')
+ exts.append(' bool32_t bAllObjs = (type == VK_OBJECT_TYPE_ANY);')
exts.append(' // Check the count first thing')
exts.append(' uint64_t maxObjCount = (bAllObjs) ? numTotalObjs : numObjs[type];')
exts.append(' if (objCount > maxObjCount) {')
exts.append(' char str[1024];')
- exts.append(' sprintf(str, "OBJ ERROR : Received objTrackGetObjects() request for %lu objs, but there are only %lu objs of type %s", objCount, maxObjCount, string_XGL_OBJECT_TYPE(type));')
- exts.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, 0, 0, OBJTRACK_OBJCOUNT_MAX_EXCEEDED, "OBJTRACK", str);')
- exts.append(' return XGL_ERROR_INVALID_VALUE;')
+ exts.append(' sprintf(str, "OBJ ERROR : Received objTrackGetObjects() request for %lu objs, but there are only %lu objs of type %s", objCount, maxObjCount, string_VK_OBJECT_TYPE(type));')
+ exts.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, 0, 0, OBJTRACK_OBJCOUNT_MAX_EXCEEDED, "OBJTRACK", str);')
+ exts.append(' return VK_ERROR_INVALID_VALUE;')
exts.append(' }')
exts.append(' objNode* pTrav = (bAllObjs) ? pGlobalHead : pObjectHead[type];')
exts.append(' for (uint64_t i = 0; i < objCount; i++) {')
exts.append(' if (!pTrav) {')
exts.append(' char str[1024];')
- exts.append(' sprintf(str, "OBJ INTERNAL ERROR : Ran out of %s objs! Should have %lu, but only copied %lu and not the requested %lu.", string_XGL_OBJECT_TYPE(type), maxObjCount, i, objCount);')
- exts.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, 0, 0, OBJTRACK_INTERNAL_ERROR, "OBJTRACK", str);')
- exts.append(' return XGL_ERROR_UNKNOWN;')
+ exts.append(' sprintf(str, "OBJ INTERNAL ERROR : Ran out of %s objs! Should have %lu, but only copied %lu and not the requested %lu.", string_VK_OBJECT_TYPE(type), maxObjCount, i, objCount);')
+ exts.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, 0, 0, OBJTRACK_INTERNAL_ERROR, "OBJTRACK", str);')
+ exts.append(' return VK_ERROR_UNKNOWN;')
exts.append(' }')
exts.append(' memcpy(&pObjNodeArray[i], pTrav, sizeof(OBJTRACK_NODE));')
exts.append(' pTrav = (bAllObjs) ? pTrav->pNextGlobal : pTrav->pNextObj;')
exts.append(' }')
- exts.append(' return XGL_SUCCESS;')
+ exts.append(' return VK_SUCCESS;')
exts.append('}')
return "\n".join(exts)
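The generated objTrackGetObjects validates the requested count against the tracked totals before walking the intrusive linked lists. A compact Python analogue of that contract (list-based rather than linked, with `None` standing in for VK_OBJECT_TYPE_ANY; names are illustrative):

```python
def get_objects(objs_by_type, obj_type, count):
    """Mirror the count-then-copy contract of the generated objTrackGetObjects.

    obj_type=None plays the role of VK_OBJECT_TYPE_ANY (all tracked objects).
    """
    pool = ([o for objs in objs_by_type.values() for o in objs]
            if obj_type is None else objs_by_type.get(obj_type, []))
    if count > len(pool):
        # The generated code logs OBJTRACK_OBJCOUNT_MAX_EXCEEDED and
        # returns VK_ERROR_INVALID_VALUE at this point.
        raise ValueError("requested %d objs, only %d tracked" % (count, len(pool)))
    return pool[:count]

tracked = {"IMAGE": ["img0", "img1"], "BUFFER": ["buf0"]}
assert get_objects(tracked, "IMAGE", 1) == ["img0"]
assert len(get_objects(tracked, None, 3)) == 3
```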
def _generate_layer_gpa_function(self, extensions=[]):
func_body = []
- func_body.append("XGL_LAYER_EXPORT void* XGLAPI xglGetProcAddr(XGL_PHYSICAL_GPU gpu, const char* funcName)\n"
+ func_body.append("VK_LAYER_EXPORT void* VKAPI vkGetProcAddr(VK_PHYSICAL_GPU gpu, const char* funcName)\n"
"{\n"
- " XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) gpu;\n"
+ " VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) gpu;\n"
" void* addr;\n"
" if (gpu == NULL)\n"
" return NULL;\n"
func_body.append(" else {\n"
" if (gpuw->pGPA == NULL)\n"
" return NULL;\n"
- " return gpuw->pGPA((XGL_PHYSICAL_GPU)gpuw->nextObject, funcName);\n"
+ " return gpuw->pGPA((VK_PHYSICAL_GPU)gpuw->nextObject, funcName);\n"
" }\n"
"}\n")
return "\n".join(func_body)
- def _generate_layer_initialization(self, init_opts=False, prefix='xgl', lockname=None):
- func_body = ["#include \"xgl_dispatch_table_helper.h\""]
+ def _generate_layer_initialization(self, init_opts=False, prefix='vk', lockname=None):
+ func_body = ["#include \"vk_dispatch_table_helper.h\""]
func_body.append('static void init%s(void)\n'
'{\n' % self.layer_name)
if init_opts:
func_body.append(' getLayerOptionEnum("%sReportLevel", (uint32_t *) &g_reportingLevel);' % self.layer_name)
func_body.append(' g_actionIsDefault = getLayerOptionEnum("%sDebugAction", (uint32_t *) &g_debugAction);' % self.layer_name)
func_body.append('')
- func_body.append(' if (g_debugAction & XGL_DBG_LAYER_ACTION_LOG_MSG)')
+ func_body.append(' if (g_debugAction & VK_DBG_LAYER_ACTION_LOG_MSG)')
func_body.append(' {')
func_body.append(' strOpt = getLayerOption("%sLogFilename");' % self.layer_name)
func_body.append(' if (strOpt)')
func_body.append(' g_logFile = stdout;')
func_body.append(' }')
func_body.append('')
- func_body.append(' xglGetProcAddrType fpNextGPA;\n'
+ func_body.append(' vkGetProcAddrType fpNextGPA;\n'
' fpNextGPA = pCurObj->pGPA;\n'
' assert(fpNextGPA);\n')
- func_body.append(" layer_initialize_dispatch_table(&nextTable, fpNextGPA, (XGL_PHYSICAL_GPU) pCurObj->nextObject);")
+ func_body.append(" layer_initialize_dispatch_table(&nextTable, fpNextGPA, (VK_PHYSICAL_GPU) pCurObj->nextObject);")
if lockname is not None:
func_body.append(" if (!%sLockInitialized)" % lockname)
func_body.append(" {")
func_body.append("}\n")
return "\n".join(func_body)
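The initialization body emitted here follows the standard layer-chaining pattern: run once per process via loader_platform_thread_once, take the next layer's GetProcAddr from pCurObj, and fill the local dispatch table from it. A rough Python model of that flow (the class and names are illustrative, not part of the generator):

```python
import threading

class Layer:
    """Toy model of the generated init<LayerName> / dispatch pattern."""
    def __init__(self, next_gpa):
        self._next_gpa = next_gpa      # plays the role of pCurObj->pGPA
        self._once = threading.Lock()  # stands in for loader_platform_thread_once
        self._table = None

    def _init_table(self):
        # layer_initialize_dispatch_table(): resolve each entry via the next GPA.
        self._table = {n: self._next_gpa(n) for n in ("CreateDevice", "QueueSubmit")}

    def dispatch(self, name, *args):
        with self._once:               # one-time, thread-safe table construction
            if self._table is None:
                self._init_table()
        return self._table[name](*args)

# The "next layer" here is a fake driver that just echoes the entry-point name.
driver = Layer(lambda n: (lambda *a: "driver:" + n))
assert driver.dispatch("QueueSubmit") == "driver:QueueSubmit"
```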
- def _generate_layer_initialization_with_lock(self, prefix='xgl'):
- func_body = ["#include \"xgl_dispatch_table_helper.h\""]
+ def _generate_layer_initialization_with_lock(self, prefix='vk'):
+ func_body = ["#include \"vk_dispatch_table_helper.h\""]
func_body.append('static void init%s(void)\n'
'{\n'
- ' xglGetProcAddrType fpNextGPA;\n'
+ ' vkGetProcAddrType fpNextGPA;\n'
' fpNextGPA = pCurObj->pGPA;\n'
' assert(fpNextGPA);\n' % self.layer_name);
- func_body.append(" layer_initialize_dispatch_table(&nextTable, fpNextGPA, (XGL_PHYSICAL_GPU) pCurObj->nextObject);\n")
+ func_body.append(" layer_initialize_dispatch_table(&nextTable, fpNextGPA, (VK_PHYSICAL_GPU) pCurObj->nextObject);\n")
func_body.append(" if (!printLockInitialized)")
func_body.append(" {")
func_body.append(" // TODO/TBD: Need to delete this mutex sometime. How???")
class LayerFuncsSubcommand(Subcommand):
def generate_header(self):
- return '#include <xglLayer.h>\n#include "loader.h"'
+ return '#include <vkLayer.h>\n#include "loader.h"'
def generate_body(self):
return self._generate_dispatch_entrypoints("static")
class GenericLayerSubcommand(Subcommand):
def generate_header(self):
- return '#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include "loader_platform.h"\n#include "xglLayer.h"\n//The following is #included again to catch certain OS-specific functions being used:\n#include "loader_platform.h"\n\n#include "layers_config.h"\n#include "layers_msg.h"\n\nstatic XGL_LAYER_DISPATCH_TABLE nextTable;\nstatic XGL_BASE_LAYER_OBJECT *pCurObj;\n\nstatic LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);'
+ return '#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include "loader_platform.h"\n#include "vkLayer.h"\n//The following is #included again to catch certain OS-specific functions being used:\n#include "loader_platform.h"\n\n#include "layers_config.h"\n#include "layers_msg.h"\n\nstatic VK_LAYER_DISPATCH_TABLE nextTable;\nstatic VK_BASE_LAYER_OBJECT *pCurObj;\n\nstatic LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);'
def generate_intercept(self, proto, qual):
if proto.name in [ 'DbgRegisterMsgCallback', 'DbgUnregisterMsgCallback' , 'GetExtensionSupport']:
# use default version
return None
- decl = proto.c_func(prefix="xgl", attr="XGLAPI")
+ decl = proto.c_func(prefix="vk", attr="VKAPI")
param0_name = proto.params[0].name
ret_val = ''
stmt = ''
funcs = []
if proto.ret != "void":
- ret_val = "XGL_RESULT result = "
+ ret_val = "VK_RESULT result = "
stmt = " return result;\n"
if 'WsiX11AssociateConnection' == proto.name:
funcs.append("#if defined(__linux__) || defined(XCB_NVIDIA)")
if proto.name == "EnumerateLayers":
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
' char str[1024];\n'
' if (gpu != NULL) {\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' sprintf(str, "At start of layered %s\\n");\n'
- ' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, gpu, 0, 0, (char *) "GENERIC", (char *) str);\n'
+ ' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, gpu, 0, 0, (char *) "GENERIC", (char *) str);\n'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %snextTable.%s;\n'
' sprintf(str, "Completed layered %s\\n");\n'
- ' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, gpu, 0, 0, (char *) "GENERIC", (char *) str);\n'
+ ' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, gpu, 0, 0, (char *) "GENERIC", (char *) str);\n'
' fflush(stdout);\n'
' %s'
' } else {\n'
' if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)\n'
- ' return XGL_ERROR_INVALID_POINTER;\n'
+ ' return VK_ERROR_INVALID_POINTER;\n'
' // This layer is compatible with all GPUs\n'

' *pOutLayerCount = 1;\n'
' strncpy((char *) pOutLayers[0], "%s", maxStringSize);\n'
- ' return XGL_SUCCESS;\n'
+ ' return VK_SUCCESS;\n'
' }\n'
'}' % (qual, decl, proto.params[0].name, proto.name, self.layer_name, ret_val, c_call, proto.name, stmt, self.layer_name))
- elif proto.params[0].ty != "XGL_PHYSICAL_GPU":
+ elif proto.params[0].ty != "VK_PHYSICAL_GPU":
funcs.append('%s%s\n'
'{\n'
' %snextTable.%s;\n'
'%s'
'}' % (qual, decl, ret_val, proto.c_call(), stmt))
else:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
' char str[1024];'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' sprintf(str, "At start of layered %s\\n");\n'
- ' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, gpuw, 0, 0, (char *) "GENERIC", (char *) str);\n'
+ ' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, gpuw, 0, 0, (char *) "GENERIC", (char *) str);\n'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %snextTable.%s;\n'
' sprintf(str, "Completed layered %s\\n");\n'
- ' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, gpuw, 0, 0, (char *) "GENERIC", (char *) str);\n'
+ ' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, gpuw, 0, 0, (char *) "GENERIC", (char *) str);\n'
' fflush(stdout);\n'
'%s'
'}' % (qual, decl, proto.params[0].name, proto.name, self.layer_name, ret_val, c_call, proto.name, stmt))
def generate_body(self):
self.layer_name = "Generic"
body = [self._generate_layer_initialization(True),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_layer_gpa_function()]
return "\n\n".join(body)
header_txt = []
header_txt.append('#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('#include "xglLayer.h"\n#include "xgl_struct_string_helper.h"\n')
+ header_txt.append('#include "vkLayer.h"\n#include "vk_struct_string_helper.h"\n')
header_txt.append('// The following is #included again to catch certain OS-specific functions being used:')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('static XGL_LAYER_DISPATCH_TABLE nextTable;')
- header_txt.append('static XGL_BASE_LAYER_OBJECT *pCurObj;\n')
+ header_txt.append('static VK_LAYER_DISPATCH_TABLE nextTable;')
+ header_txt.append('static VK_BASE_LAYER_OBJECT *pCurObj;\n')
header_txt.append('static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);')
header_txt.append('static int printLockInitialized = 0;')
header_txt.append('static loader_platform_thread_mutex printLock;\n')
return "\n".join(header_txt)
def generate_intercept(self, proto, qual):
- decl = proto.c_func(prefix="xgl", attr="XGLAPI")
+ decl = proto.c_func(prefix="vk", attr="VKAPI")
param0_name = proto.params[0].name
ret_val = ''
stmt = ''
elif 'Create' in proto.name or 'Alloc' in proto.name or 'MapMemory' in proto.name:
create_params = -1
if proto.ret != "void":
- ret_val = "XGL_RESULT result = "
+ ret_val = "VK_RESULT result = "
stmt = " return result;\n"
f_open = ''
f_close = ''
if 'CreateDevice' in proto.name:
file_mode = "w"
f_open = 'loader_platform_thread_lock_mutex(&printLock);\n pOutFile = fopen(outFileName, "%s");\n ' % (file_mode)
- log_func = 'fprintf(pOutFile, "t{%%u} xgl%s(' % proto.name
+ log_func = 'fprintf(pOutFile, "t{%%u} vk%s(' % proto.name
f_close = '\n fclose(pOutFile);\n loader_platform_thread_unlock_mutex(&printLock);'
else:
f_open = 'loader_platform_thread_lock_mutex(&printLock);\n '
- log_func = 'printf("t{%%u} xgl%s(' % proto.name
+ log_func = 'printf("t{%%u} vk%s(' % proto.name
f_close = '\n loader_platform_thread_unlock_mutex(&printLock);'
print_vals = ', getTIDIndex()'
pindex = 0
sp_param_dict[pindex] = prev_count_name
elif 'pDescriptorSets' == p.name and proto.params[-1].name == 'pCount':
sp_param_dict[pindex] = '*pCount'
- elif 'Wsi' not in proto.name and xgl_helper.is_type(p.ty.strip('*').strip('const '), 'struct'):
+ elif 'Wsi' not in proto.name and vk_helper.is_type(p.ty.strip('*').strip('const '), 'struct'):
sp_param_dict[pindex] = 'index'
pindex += 1
if p.name.endswith('Count'):
log_func = log_func.strip(', ')
if proto.ret != "void":
log_func += ') = %s\\n"'
- print_vals += ', string_XGL_RESULT(result)'
+ print_vals += ', string_VK_RESULT(result)'
else:
log_func += ')\\n"'
log_func = '%s%s);' % (log_func, print_vals)
for sp_index in sorted(sp_param_dict):
# TODO : Clean this if/else block up, too much duplicated code
if 'index' == sp_param_dict[sp_index]:
- cis_print_func = 'xgl_print_%s' % (proto.params[sp_index].ty.strip('const ').strip('*').lower())
+ cis_print_func = 'vk_print_%s' % (proto.params[sp_index].ty.strip('const ').strip('*').lower())
var_name = proto.params[sp_index].name
if proto.params[sp_index].name != 'color':
log_func += '\n if (%s) {' % (proto.params[sp_index].name)
if proto.params[sp_index].name != 'color':
log_func += '\n }'
else: # should have a count value stored to iterate over array
- if xgl_helper.is_type(proto.params[sp_index].ty.strip('*').strip('const '), 'struct'):
- cis_print_func = 'pTmpStr = xgl_print_%s(&%s[i], " ");' % (proto.params[sp_index].ty.strip('const ').strip('*').lower(), proto.params[sp_index].name)
+ if vk_helper.is_type(proto.params[sp_index].ty.strip('*').strip('const '), 'struct'):
+ cis_print_func = 'pTmpStr = vk_print_%s(&%s[i], " ");' % (proto.params[sp_index].ty.strip('const ').strip('*').lower(), proto.params[sp_index].name)
else:
cis_print_func = 'pTmpStr = (char*)malloc(32);\n sprintf(pTmpStr, " %%p", %s[i]);' % proto.params[sp_index].name
if not i_decl:
if 'WsiX11AssociateConnection' == proto.name:
funcs.append("#if defined(__linux__) || defined(XCB_NVIDIA)")
if proto.name == "EnumerateLayers":
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
' if (gpu != NULL) {\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %snextTable.%s;\n'
' %s %s %s\n'
' %s'
' } else {\n'
' if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)\n'
- ' return XGL_ERROR_INVALID_POINTER;\n'
+ ' return VK_ERROR_INVALID_POINTER;\n'
' // This layer is compatible with all GPUs\n'
' *pOutLayerCount = 1;\n'
' strncpy((char *) pOutLayers[0], "%s", maxStringSize);\n'
- ' return XGL_SUCCESS;\n'
+ ' return VK_SUCCESS;\n'
' }\n'
'}' % (qual, decl, proto.params[0].name, self.layer_name, ret_val, c_call,f_open, log_func, f_close, stmt, self.layer_name))
elif 'GetExtensionSupport' == proto.name:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
- ' XGL_RESULT result;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_RESULT result;\n'
' /* This entrypoint is NOT going to init its own dispatch table since loader calls here early */\n'
' if (!strncmp(pExtName, "%s", strlen("%s")))\n'
' {\n'
- ' result = XGL_SUCCESS;\n'
+ ' result = VK_SUCCESS;\n'
' } else if (nextTable.GetExtensionSupport != NULL)\n'
' {\n'
' result = nextTable.%s;\n'
' %s %s %s\n'
' } else\n'
' {\n'
- ' result = XGL_ERROR_INVALID_EXTENSION;\n'
+ ' result = VK_ERROR_INVALID_EXTENSION;\n'
' }\n'
'%s'
'}' % (qual, decl, proto.params[0].name, self.layer_name, self.layer_name, c_call, f_open, log_func, f_close, stmt))
- elif proto.params[0].ty != "XGL_PHYSICAL_GPU":
+ elif proto.params[0].ty != "VK_PHYSICAL_GPU":
funcs.append('%s%s\n'
'{\n'
' %snextTable.%s;\n'
' %s %s %s\n'
'%s'
'}' % (qual, decl, ret_val, proto.c_call(), f_open, log_func, f_close, stmt))
else:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %snextTable.%s;\n'
def generate_body(self):
self.layer_name = "APIDump"
body = [self._generate_layer_initialization_with_lock(),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_layer_gpa_function()]
return "\n\n".join(body)
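The APIDump entry points assembled above print one line per intercepted call, tagged with a per-thread index and, for non-void entry points, the stringified result. A sketch of the line format the generator builds up in `log_func` (the values shown are illustrative):

```python
def format_call(tid, name, args, result=None):
    """Approximate the 't{%u} vkName(...)' line the APIDump layer emits."""
    line = "t{%u} vk%s(%s)" % (tid, name, ", ".join(args))
    if result is not None:             # proto.ret != "void" in the generator
        line += " = %s" % result       # string_VK_RESULT(result) in the C code
    return line

assert format_call(0, "CreateDevice", ["gpu", "pCreateInfo"], "VK_SUCCESS") == \
    "t{0} vkCreateDevice(gpu, pCreateInfo) = VK_SUCCESS"
```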
header_txt = []
header_txt.append('#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('#include "xglLayer.h"\n#include "xgl_struct_string_helper_cpp.h"\n')
+ header_txt.append('#include "vkLayer.h"\n#include "vk_struct_string_helper_cpp.h"\n')
header_txt.append('// The following is #included again to catch certain OS-specific functions being used:')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('static XGL_LAYER_DISPATCH_TABLE nextTable;')
- header_txt.append('static XGL_BASE_LAYER_OBJECT *pCurObj;\n')
+ header_txt.append('static VK_LAYER_DISPATCH_TABLE nextTable;')
+ header_txt.append('static VK_BASE_LAYER_OBJECT *pCurObj;\n')
header_txt.append('static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);')
header_txt.append('static int printLockInitialized = 0;')
header_txt.append('static loader_platform_thread_mutex printLock;\n')
return "\n".join(header_txt)
def generate_intercept(self, proto, qual):
- decl = proto.c_func(prefix="xgl", attr="XGLAPI")
+ decl = proto.c_func(prefix="vk", attr="VKAPI")
param0_name = proto.params[0].name
ret_val = ''
stmt = ''
elif 'Create' in proto.name or 'Alloc' in proto.name or 'MapMemory' in proto.name:
create_params = -1
if proto.ret != "void":
- ret_val = "XGL_RESULT result = "
+ ret_val = "VK_RESULT result = "
stmt = " return result;\n"
f_open = ''
f_close = ''
if 'CreateDevice' in proto.name:
file_mode = "w"
f_open = 'loader_platform_thread_lock_mutex(&printLock);\n pOutFile = fopen(outFileName, "%s");\n ' % (file_mode)
- log_func = 'fprintf(pOutFile, "t{%%u} xgl%s(' % proto.name
+ log_func = 'fprintf(pOutFile, "t{%%u} vk%s(' % proto.name
f_close = '\n fclose(pOutFile);\n loader_platform_thread_unlock_mutex(&printLock);'
else:
f_open = 'loader_platform_thread_lock_mutex(&printLock);\n '
- log_func = 'cout << "t{" << getTIDIndex() << "} xgl%s(' % proto.name
+ log_func = 'cout << "t{" << getTIDIndex() << "} vk%s(' % proto.name
f_close = '\n loader_platform_thread_unlock_mutex(&printLock);'
pindex = 0
prev_count_name = ''
sp_param_dict[pindex] = prev_count_name
elif 'pDescriptorSets' == p.name and proto.params[-1].name == 'pCount':
sp_param_dict[pindex] = '*pCount'
- elif 'Wsi' not in proto.name and xgl_helper.is_type(p.ty.strip('*').strip('const '), 'struct'):
+ elif 'Wsi' not in proto.name and vk_helper.is_type(p.ty.strip('*').strip('const '), 'struct'):
sp_param_dict[pindex] = 'index'
pindex += 1
if p.name.endswith('Count'):
prev_count_name = ''
log_func = log_func.strip(', ')
if proto.ret != "void":
- log_func += ') = " << string_XGL_RESULT((XGL_RESULT)result) << endl'
- #print_vals += ', string_XGL_RESULT_CODE(result)'
+ log_func += ') = " << string_VK_RESULT((VK_RESULT)result) << endl'
+ #print_vals += ', string_VK_RESULT_CODE(result)'
else:
log_func += ')\\n"'
log_func += ';'
log_func += '\n string tmp_str;'
for sp_index in sp_param_dict:
if 'index' == sp_param_dict[sp_index]:
- cis_print_func = 'xgl_print_%s' % (proto.params[sp_index].ty.strip('const ').strip('*').lower())
+ cis_print_func = 'vk_print_%s' % (proto.params[sp_index].ty.strip('const ').strip('*').lower())
var_name = proto.params[sp_index].name
if proto.params[sp_index].name != 'color':
log_func += '\n if (%s) {' % (proto.params[sp_index].name)
else: # We have a count value stored to iterate over an array
print_cast = ''
print_func = ''
- if xgl_helper.is_type(proto.params[sp_index].ty.strip('*').strip('const '), 'struct'):
+ if vk_helper.is_type(proto.params[sp_index].ty.strip('*').strip('const '), 'struct'):
print_cast = '&'
- print_func = 'xgl_print_%s' % proto.params[sp_index].ty.strip('const ').strip('*').lower()
- #cis_print_func = 'tmp_str = xgl_print_%s(&%s[i], " ");' % (proto.params[sp_index].ty.strip('const ').strip('*').lower(), proto.params[sp_index].name)
+ print_func = 'vk_print_%s' % proto.params[sp_index].ty.strip('const ').strip('*').lower()
+ #cis_print_func = 'tmp_str = vk_print_%s(&%s[i], " ");' % (proto.params[sp_index].ty.strip('const ').strip('*').lower(), proto.params[sp_index].name)
# TODO : Need to display this address as a string
else:
print_cast = '(void*)'
if 'WsiX11AssociateConnection' == proto.name:
funcs.append("#if defined(__linux__) || defined(XCB_NVIDIA)")
if proto.name == "EnumerateLayers":
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
' if (gpu != NULL) {\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %snextTable.%s;\n'
' %s %s %s\n'
' %s'
' } else {\n'
' if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)\n'
- ' return XGL_ERROR_INVALID_POINTER;\n'
+ ' return VK_ERROR_INVALID_POINTER;\n'
' // This layer is compatible with all GPUs\n'
' *pOutLayerCount = 1;\n'
' strncpy((char *) pOutLayers[0], "%s", maxStringSize);\n'
- ' return XGL_SUCCESS;\n'
+ ' return VK_SUCCESS;\n'
' }\n'
'}' % (qual, decl, proto.params[0].name, self.layer_name, ret_val, c_call,f_open, log_func, f_close, stmt, self.layer_name))
elif 'GetExtensionSupport' == proto.name:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
- ' XGL_RESULT result;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_RESULT result;\n'
' /* This entrypoint is NOT going to init its own dispatch table since loader calls here early */\n'
' if (!strncmp(pExtName, "%s", strlen("%s")))\n'
' {\n'
- ' result = XGL_SUCCESS;\n'
+ ' result = VK_SUCCESS;\n'
' } else if (nextTable.GetExtensionSupport != NULL)\n'
' {\n'
' result = nextTable.%s;\n'
' %s %s %s\n'
' } else\n'
' {\n'
- ' result = XGL_ERROR_INVALID_EXTENSION;\n'
+ ' result = VK_ERROR_INVALID_EXTENSION;\n'
' }\n'
'%s'
'}' % (qual, decl, proto.params[0].name, self.layer_name, self.layer_name, c_call, f_open, log_func, f_close, stmt))
- elif proto.params[0].ty != "XGL_PHYSICAL_GPU":
+ elif proto.params[0].ty != "VK_PHYSICAL_GPU":
funcs.append('%s%s\n'
'{\n'
' %snextTable.%s;\n'
' %s %s %s\n'
'%s'
'}' % (qual, decl, ret_val, proto.c_call(), f_open, log_func, f_close, stmt))
else:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %snextTable.%s;\n'
def generate_body(self):
self.layer_name = "APIDumpCpp"
body = [self._generate_layer_initialization_with_lock(),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_layer_gpa_function()]
return "\n\n".join(body)
header_txt = []
header_txt.append('#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('#include "xglLayer.h"\n#include "xgl_struct_string_helper.h"\n')
+ header_txt.append('#include "vkLayer.h"\n#include "vk_struct_string_helper.h"\n')
header_txt.append('// The following is #included again to catch certain OS-specific functions being used:')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('static XGL_LAYER_DISPATCH_TABLE nextTable;')
- header_txt.append('static XGL_BASE_LAYER_OBJECT *pCurObj;\n')
+ header_txt.append('static VK_LAYER_DISPATCH_TABLE nextTable;')
+ header_txt.append('static VK_BASE_LAYER_OBJECT *pCurObj;\n')
header_txt.append('static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);')
header_txt.append('static int printLockInitialized = 0;')
header_txt.append('static loader_platform_thread_mutex printLock;\n')
header_txt.append(' assert(maxTID < MAX_TID);')
header_txt.append(' return retVal;')
header_txt.append('}\n')
- header_txt.append('static FILE* pOutFile;\nstatic char* outFileName = "xgl_apidump.txt";')
+ header_txt.append('static FILE* pOutFile;\nstatic char* outFileName = "vk_apidump.txt";')
return "\n".join(header_txt)
def generate_body(self):
self.layer_name = "APIDumpFile"
body = [self._generate_layer_initialization_with_lock(),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_layer_gpa_function()]
return "\n\n".join(body)
header_txt = []
header_txt.append('#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('#include "xglLayer.h"\n#include "xgl_struct_string_helper_no_addr.h"\n')
+ header_txt.append('#include "vkLayer.h"\n#include "vk_struct_string_helper_no_addr.h"\n')
header_txt.append('// The following is #included again to catch certain OS-specific functions being used:')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('static XGL_LAYER_DISPATCH_TABLE nextTable;')
- header_txt.append('static XGL_BASE_LAYER_OBJECT *pCurObj;\n')
+ header_txt.append('static VK_LAYER_DISPATCH_TABLE nextTable;')
+ header_txt.append('static VK_BASE_LAYER_OBJECT *pCurObj;\n')
header_txt.append('static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);')
header_txt.append('static int printLockInitialized = 0;')
header_txt.append('static loader_platform_thread_mutex printLock;\n')
self.layer_name = "APIDumpNoAddr"
self.no_addr = True
body = [self._generate_layer_initialization_with_lock(),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_layer_gpa_function()]
return "\n\n".join(body)
header_txt = []
header_txt.append('#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('#include "xglLayer.h"\n#include "xgl_struct_string_helper_no_addr_cpp.h"\n')
+ header_txt.append('#include "vkLayer.h"\n#include "vk_struct_string_helper_no_addr_cpp.h"\n')
header_txt.append('// The following is #included again to catch certain OS-specific functions being used:')
header_txt.append('#include "loader_platform.h"')
- header_txt.append('static XGL_LAYER_DISPATCH_TABLE nextTable;')
- header_txt.append('static XGL_BASE_LAYER_OBJECT *pCurObj;\n')
+ header_txt.append('static VK_LAYER_DISPATCH_TABLE nextTable;')
+ header_txt.append('static VK_BASE_LAYER_OBJECT *pCurObj;\n')
header_txt.append('static LOADER_PLATFORM_THREAD_ONCE_DECLARATION(tabOnce);')
header_txt.append('static int printLockInitialized = 0;')
header_txt.append('static loader_platform_thread_mutex printLock;\n')
self.layer_name = "APIDumpNoAddrCpp"
self.no_addr = True
body = [self._generate_layer_initialization_with_lock(),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_layer_gpa_function()]
return "\n\n".join(body)
def generate_header(self):
header_txt = []
header_txt.append('#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include "loader_platform.h"')
- header_txt.append('#include "object_track.h"\n\nstatic XGL_LAYER_DISPATCH_TABLE nextTable;\nstatic XGL_BASE_LAYER_OBJECT *pCurObj;')
+ header_txt.append('#include "object_track.h"\n\nstatic VK_LAYER_DISPATCH_TABLE nextTable;\nstatic VK_BASE_LAYER_OBJECT *pCurObj;')
header_txt.append('// The following is #included again to catch certain OS-specific functions being used:')
header_txt.append('#include "loader_platform.h"')
header_txt.append('#include "layers_config.h"')
header_txt.append(' struct _objNode *pNextObj;')
header_txt.append(' struct _objNode *pNextGlobal;')
header_txt.append('} objNode;')
- header_txt.append('static objNode *pObjectHead[XGL_NUM_OBJECT_TYPE] = {0};')
+ header_txt.append('static objNode *pObjectHead[VK_NUM_OBJECT_TYPE] = {0};')
header_txt.append('static objNode *pGlobalHead = NULL;')
- header_txt.append('static uint64_t numObjs[XGL_NUM_OBJECT_TYPE] = {0};')
+ header_txt.append('static uint64_t numObjs[VK_NUM_OBJECT_TYPE] = {0};')
header_txt.append('static uint64_t numTotalObjs = 0;')
header_txt.append('static uint32_t maxMemReferences = 0;')
header_txt.append('// Debug function to print global list and each individual object list')
header_txt.append(' objNode* pTrav = pGlobalHead;')
header_txt.append(' printf("=====GLOBAL OBJECT LIST (%lu total objs):\\n", numTotalObjs);')
header_txt.append(' while (pTrav) {')
- header_txt.append(' printf(" ObjNode (%p) w/ %s obj %p has pNextGlobal %p\\n", (void*)pTrav, string_XGL_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, (void*)pTrav->pNextGlobal);')
+ header_txt.append(' printf(" ObjNode (%p) w/ %s obj %p has pNextGlobal %p\\n", (void*)pTrav, string_VK_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, (void*)pTrav->pNextGlobal);')
header_txt.append(' pTrav = pTrav->pNextGlobal;')
header_txt.append(' }')
- header_txt.append(' for (uint32_t i = 0; i < XGL_NUM_OBJECT_TYPE; i++) {')
+ header_txt.append(' for (uint32_t i = 0; i < VK_NUM_OBJECT_TYPE; i++) {')
header_txt.append(' pTrav = pObjectHead[i];')
header_txt.append(' if (pTrav) {')
- header_txt.append(' printf("=====%s OBJECT LIST (%lu objs):\\n", string_XGL_OBJECT_TYPE(pTrav->obj.objType), numObjs[i]);')
+ header_txt.append(' printf("=====%s OBJECT LIST (%lu objs):\\n", string_VK_OBJECT_TYPE(pTrav->obj.objType), numObjs[i]);')
header_txt.append(' while (pTrav) {')
- header_txt.append(' printf(" ObjNode (%p) w/ %s obj %p has pNextObj %p\\n", (void*)pTrav, string_XGL_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, (void*)pTrav->pNextObj);')
+ header_txt.append(' printf(" ObjNode (%p) w/ %s obj %p has pNextObj %p\\n", (void*)pTrav, string_VK_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, (void*)pTrav->pNextObj);')
header_txt.append(' pTrav = pTrav->pNextObj;')
header_txt.append(' }')
header_txt.append(' }')
header_txt.append(' }')
header_txt.append('}')
- header_txt.append('static void ll_insert_obj(void* pObj, XGL_OBJECT_TYPE objType) {')
+ header_txt.append('static void ll_insert_obj(void* pObj, VK_OBJECT_TYPE objType) {')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "OBJ[%llu] : CREATE %s object %p", object_track_index++, string_XGL_OBJECT_TYPE(objType), (void*)pObj);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "OBJ[%llu] : CREATE %s object %p", object_track_index++, string_VK_OBJECT_TYPE(objType), (void*)pObj);')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
header_txt.append(' objNode* pNewObjNode = (objNode*)malloc(sizeof(objNode));')
header_txt.append(' pNewObjNode->obj.pObj = pObj;')
header_txt.append(' pNewObjNode->obj.objType = objType;')
header_txt.append(' // increment obj counts')
header_txt.append(' numObjs[objType]++;')
header_txt.append(' numTotalObjs++;')
- header_txt.append(' //sprintf(str, "OBJ_STAT : %lu total objs & %lu %s objs.", numTotalObjs, numObjs[objType], string_XGL_OBJECT_TYPE(objType));')
+ header_txt.append(' //sprintf(str, "OBJ_STAT : %lu total objs & %lu %s objs.", numTotalObjs, numObjs[objType], string_VK_OBJECT_TYPE(objType));')
header_txt.append(' if (0) ll_print_lists();')
header_txt.append('}')
header_txt.append('// Traverse global list and return type for given object')
- header_txt.append('static XGL_OBJECT_TYPE ll_get_obj_type(XGL_OBJECT object) {')
+ header_txt.append('static VK_OBJECT_TYPE ll_get_obj_type(VK_OBJECT object) {')
header_txt.append(' objNode *pTrav = pGlobalHead;')
header_txt.append(' while (pTrav) {')
header_txt.append(' if (pTrav->obj.pObj == object)')
header_txt.append(' }')
header_txt.append(' char str[1024];')
header_txt.append(' sprintf(str, "Attempting look-up on obj %p but it is NOT in the global list!", (void*)object);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, object, 0, OBJTRACK_MISSING_OBJECT, "OBJTRACK", str);')
- header_txt.append(' return XGL_OBJECT_TYPE_UNKNOWN;')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, object, 0, OBJTRACK_MISSING_OBJECT, "OBJTRACK", str);')
+ header_txt.append(' return VK_OBJECT_TYPE_UNKNOWN;')
header_txt.append('}')
header_txt.append('#if 0')
- header_txt.append('static uint64_t ll_get_obj_uses(void* pObj, XGL_OBJECT_TYPE objType) {')
+ header_txt.append('static uint64_t ll_get_obj_uses(void* pObj, VK_OBJECT_TYPE objType) {')
header_txt.append(' objNode *pTrav = pObjectHead[objType];')
header_txt.append(' while (pTrav) {')
header_txt.append(' if (pTrav->obj.pObj == pObj) {')
header_txt.append(' return 0;')
header_txt.append('}')
header_txt.append('#endif')
- header_txt.append('static void ll_increment_use_count(void* pObj, XGL_OBJECT_TYPE objType) {')
+ header_txt.append('static void ll_increment_use_count(void* pObj, VK_OBJECT_TYPE objType) {')
header_txt.append(' objNode *pTrav = pObjectHead[objType];')
header_txt.append(' while (pTrav) {')
header_txt.append(' if (pTrav->obj.pObj == pObj) {')
header_txt.append(' pTrav->obj.numUses++;')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "OBJ[%llu] : USING %s object %p (%lu total uses)", object_track_index++, string_XGL_OBJECT_TYPE(objType), (void*)pObj, pTrav->obj.numUses);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "OBJ[%llu] : USING %s object %p (%lu total uses)", object_track_index++, string_VK_OBJECT_TYPE(objType), (void*)pObj, pTrav->obj.numUses);')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
header_txt.append(' return;')
header_txt.append(' }')
header_txt.append(' pTrav = pTrav->pNextObj;')
header_txt.append(' }')
header_txt.append(' // If we do not find obj, insert it and then increment count')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "Unable to increment count for obj %p, will add to list as %s type and increment count", pObj, string_XGL_OBJECT_TYPE(objType));')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "Unable to increment count for obj %p, will add to list as %s type and increment count", pObj, string_VK_OBJECT_TYPE(objType));')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
header_txt.append('')
header_txt.append(' ll_insert_obj(pObj, objType);')
header_txt.append(' ll_increment_use_count(pObj, objType);')
header_txt.append('// We usually do not know the object type at destroy time, so fetch the')
header_txt.append('// type from the global list with ll_destroy_obj()')
header_txt.append('// and then do the full removal from both lists with ll_remove_obj_type()')
- header_txt.append('static void ll_remove_obj_type(void* pObj, XGL_OBJECT_TYPE objType) {')
+ header_txt.append('static void ll_remove_obj_type(void* pObj, VK_OBJECT_TYPE objType) {')
header_txt.append(' objNode *pTrav = pObjectHead[objType];')
header_txt.append(' objNode *pPrev = pObjectHead[objType];')
header_txt.append(' while (pTrav) {')
header_txt.append(' assert(numObjs[objType] > 0);')
header_txt.append(' numObjs[objType]--;')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "OBJ[%llu] : DESTROY %s object %p", object_track_index++, string_XGL_OBJECT_TYPE(objType), (void*)pObj);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "OBJ[%llu] : DESTROY %s object %p", object_track_index++, string_VK_OBJECT_TYPE(objType), (void*)pObj);')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
header_txt.append(' return;')
header_txt.append(' }')
header_txt.append(' pPrev = pTrav;')
header_txt.append(' pTrav = pTrav->pNextObj;')
header_txt.append(' }')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "OBJ INTERNAL ERROR : Obj %p was in global list but not in %s list", pObj, string_XGL_OBJECT_TYPE(objType));')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_INTERNAL_ERROR, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "OBJ INTERNAL ERROR : Obj %p was in global list but not in %s list", pObj, string_VK_OBJECT_TYPE(objType));')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_INTERNAL_ERROR, "OBJTRACK", str);')
header_txt.append('}')
header_txt.append('// Parse global list to find obj type, then remove obj from obj type list, finally')
header_txt.append('// remove obj from global list')
header_txt.append(' assert(numTotalObjs > 0);')
header_txt.append(' numTotalObjs--;')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "OBJ_STAT Removed %s obj %p that was used %lu times (%lu total objs remain & %lu %s objs).", string_XGL_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, pTrav->obj.numUses, numTotalObjs, numObjs[pTrav->obj.objType], string_XGL_OBJECT_TYPE(pTrav->obj.objType));')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_UNKNOWN, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "OBJ_STAT Removed %s obj %p that was used %lu times (%lu total objs remain & %lu %s objs).", string_VK_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, pTrav->obj.numUses, numTotalObjs, numObjs[pTrav->obj.objType], string_VK_OBJECT_TYPE(pTrav->obj.objType));')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_UNKNOWN, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_NONE, "OBJTRACK", str);')
header_txt.append(' free(pTrav);')
header_txt.append(' return;')
header_txt.append(' }')
header_txt.append(' }')
header_txt.append(' char str[1024];')
header_txt.append(' sprintf(str, "Unable to remove obj %p. Was it created? Has it already been destroyed?", pObj);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_DESTROY_OBJECT_FAILED, "OBJTRACK", str);')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_DESTROY_OBJECT_FAILED, "OBJTRACK", str);')
header_txt.append('}')
header_txt.append('// Set selected flag state for an object node')
- header_txt.append('static void set_status(void* pObj, XGL_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {')
+ header_txt.append('static void set_status(void* pObj, VK_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {')
header_txt.append(' if (pObj != NULL) {')
header_txt.append(' objNode *pTrav = pObjectHead[objType];')
header_txt.append(' while (pTrav) {')
header_txt.append(' }')
header_txt.append('        // If we do not find it, print an error')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "Unable to set status for non-existent object %p of %s type", pObj, string_XGL_OBJECT_TYPE(objType));')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "Unable to set status for non-existent object %p of %s type", pObj, string_VK_OBJECT_TYPE(objType));')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
header_txt.append('    }')
header_txt.append('}')
header_txt.append('')
header_txt.append('// Track selected state for an object node')
- header_txt.append('static void track_object_status(void* pObj, XGL_STATE_BIND_POINT stateBindPoint) {')
- header_txt.append(' objNode *pTrav = pObjectHead[XGL_OBJECT_TYPE_CMD_BUFFER];')
+ header_txt.append('static void track_object_status(void* pObj, VK_STATE_BIND_POINT stateBindPoint) {')
+ header_txt.append(' objNode *pTrav = pObjectHead[VK_OBJECT_TYPE_CMD_BUFFER];')
header_txt.append('')
header_txt.append(' while (pTrav) {')
header_txt.append(' if (pTrav->obj.pObj == pObj) {')
- header_txt.append(' if (stateBindPoint == XGL_STATE_BIND_VIEWPORT) {')
+ header_txt.append(' if (stateBindPoint == VK_STATE_BIND_VIEWPORT) {')
header_txt.append(' pTrav->obj.status |= OBJSTATUS_VIEWPORT_BOUND;')
- header_txt.append(' } else if (stateBindPoint == XGL_STATE_BIND_RASTER) {')
+ header_txt.append(' } else if (stateBindPoint == VK_STATE_BIND_RASTER) {')
header_txt.append(' pTrav->obj.status |= OBJSTATUS_RASTER_BOUND;')
- header_txt.append(' } else if (stateBindPoint == XGL_STATE_BIND_COLOR_BLEND) {')
+ header_txt.append(' } else if (stateBindPoint == VK_STATE_BIND_COLOR_BLEND) {')
header_txt.append(' pTrav->obj.status |= OBJSTATUS_COLOR_BLEND_BOUND;')
- header_txt.append(' } else if (stateBindPoint == XGL_STATE_BIND_DEPTH_STENCIL) {')
+ header_txt.append(' } else if (stateBindPoint == VK_STATE_BIND_DEPTH_STENCIL) {')
header_txt.append(' pTrav->obj.status |= OBJSTATUS_DEPTH_STENCIL_BOUND;')
header_txt.append(' }')
header_txt.append(' return;')
header_txt.append('    // If we do not find it, print an error')
header_txt.append(' char str[1024];')
header_txt.append(' sprintf(str, "Unable to track status for non-existent Command Buffer object %p", pObj);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
header_txt.append('}')
header_txt.append('')
header_txt.append('// Reset selected flag state for an object node')
- header_txt.append('static void reset_status(void* pObj, XGL_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {')
+ header_txt.append('static void reset_status(void* pObj, VK_OBJECT_TYPE objType, OBJECT_STATUS status_flag) {')
header_txt.append(' objNode *pTrav = pObjectHead[objType];')
header_txt.append(' while (pTrav) {')
header_txt.append(' if (pTrav->obj.pObj == pObj) {')
header_txt.append(' }')
header_txt.append('    // If we do not find it, print an error')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "Unable to reset status for non-existent object %p of %s type", pObj, string_XGL_OBJECT_TYPE(objType));')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "Unable to reset status for non-existent object %p of %s type", pObj, string_VK_OBJECT_TYPE(objType));')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
header_txt.append('}')
header_txt.append('')
header_txt.append('// Check object status for selected flag state')
- header_txt.append('static bool32_t validate_status(void* pObj, XGL_OBJECT_TYPE objType, OBJECT_STATUS status_mask, OBJECT_STATUS status_flag, XGL_DBG_MSG_TYPE error_level, OBJECT_TRACK_ERROR error_code, char* fail_msg) {')
+ header_txt.append('static bool32_t validate_status(void* pObj, VK_OBJECT_TYPE objType, OBJECT_STATUS status_mask, OBJECT_STATUS status_flag, VK_DBG_MSG_TYPE error_level, OBJECT_TRACK_ERROR error_code, char* fail_msg) {')
header_txt.append(' objNode *pTrav = pObjectHead[objType];')
header_txt.append(' while (pTrav) {')
header_txt.append(' if (pTrav->obj.pObj == pObj) {')
header_txt.append(' if ((pTrav->obj.status & status_mask) != status_flag) {')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "OBJECT VALIDATION WARNING: %s object %p: %s", string_XGL_OBJECT_TYPE(objType), (void*)pObj, fail_msg);')
- header_txt.append(' layerCbMsg(error_level, XGL_VALIDATION_LEVEL_0, pObj, 0, error_code, "OBJTRACK", str);')
- header_txt.append(' return XGL_FALSE;')
+ header_txt.append(' sprintf(str, "OBJECT VALIDATION WARNING: %s object %p: %s", string_VK_OBJECT_TYPE(objType), (void*)pObj, fail_msg);')
+ header_txt.append(' layerCbMsg(error_level, VK_VALIDATION_LEVEL_0, pObj, 0, error_code, "OBJTRACK", str);')
+ header_txt.append(' return VK_FALSE;')
header_txt.append(' }')
- header_txt.append(' return XGL_TRUE;')
+ header_txt.append(' return VK_TRUE;')
header_txt.append(' }')
header_txt.append(' pTrav = pTrav->pNextObj;')
header_txt.append(' }')
- header_txt.append(' if (objType != XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY) {')
+ header_txt.append(' if (objType != VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY) {')
header_txt.append('        // If we do not find it, print an error')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "Unable to obtain status for non-existent object %p of %s type", pObj, string_XGL_OBJECT_TYPE(objType));')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "Unable to obtain status for non-existent object %p of %s type", pObj, string_VK_OBJECT_TYPE(objType));')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, pObj, 0, OBJTRACK_UNKNOWN_OBJECT, "OBJTRACK", str);')
header_txt.append(' }')
- header_txt.append(' return XGL_FALSE;')
+ header_txt.append(' return VK_FALSE;')
header_txt.append('}')
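The generated validate_status above reduces to a masked-bit comparison: an object passes when the bits selected by status_mask match status_flag. A minimal Python sketch of that check (the flag values below are hypothetical stand-ins, not the real OBJSTATUS_* definitions):

```python
# Illustrative re-statement of the generated C check; the bit values are
# made-up stand-ins for the OBJSTATUS_* flags.
OBJSTATUS_VIEWPORT_BOUND = 0x1
OBJSTATUS_RASTER_BOUND = 0x2

def status_ok(status, status_mask, status_flag):
    # Mirrors `(pTrav->obj.status & status_mask) != status_flag` in the
    # emitted C (inverted: True means the check passes).
    return (status & status_mask) == status_flag

# A command buffer with only the viewport bound passes the viewport check
# but fails the raster check.
status = OBJSTATUS_VIEWPORT_BOUND
print(status_ok(status, OBJSTATUS_VIEWPORT_BOUND, OBJSTATUS_VIEWPORT_BOUND))  # True
print(status_ok(status, OBJSTATUS_RASTER_BOUND, OBJSTATUS_RASTER_BOUND))      # False
```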
header_txt.append('')
header_txt.append('static void validate_draw_state_flags(void* pObj) {')
- header_txt.append(' validate_status((void*)pObj, XGL_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_VIEWPORT_BOUND, OBJSTATUS_VIEWPORT_BOUND, XGL_DBG_MSG_ERROR, OBJTRACK_VIEWPORT_NOT_BOUND, "Viewport object not bound to this command buffer");')
- header_txt.append(' validate_status((void*)pObj, XGL_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_RASTER_BOUND, OBJSTATUS_RASTER_BOUND, XGL_DBG_MSG_ERROR, OBJTRACK_RASTER_NOT_BOUND, "Raster object not bound to this command buffer");')
- header_txt.append(' validate_status((void*)pObj, XGL_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_COLOR_BLEND_BOUND, OBJSTATUS_COLOR_BLEND_BOUND, XGL_DBG_MSG_UNKNOWN, OBJTRACK_COLOR_BLEND_NOT_BOUND, "Color-blend object not bound to this command buffer");')
- header_txt.append(' validate_status((void*)pObj, XGL_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_DEPTH_STENCIL_BOUND, OBJSTATUS_DEPTH_STENCIL_BOUND, XGL_DBG_MSG_UNKNOWN, OBJTRACK_DEPTH_STENCIL_NOT_BOUND, "Depth-stencil object not bound to this command buffer");')
+ header_txt.append(' validate_status((void*)pObj, VK_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_VIEWPORT_BOUND, OBJSTATUS_VIEWPORT_BOUND, VK_DBG_MSG_ERROR, OBJTRACK_VIEWPORT_NOT_BOUND, "Viewport object not bound to this command buffer");')
+ header_txt.append(' validate_status((void*)pObj, VK_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_RASTER_BOUND, OBJSTATUS_RASTER_BOUND, VK_DBG_MSG_ERROR, OBJTRACK_RASTER_NOT_BOUND, "Raster object not bound to this command buffer");')
+ header_txt.append(' validate_status((void*)pObj, VK_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_COLOR_BLEND_BOUND, OBJSTATUS_COLOR_BLEND_BOUND, VK_DBG_MSG_UNKNOWN, OBJTRACK_COLOR_BLEND_NOT_BOUND, "Color-blend object not bound to this command buffer");')
+ header_txt.append(' validate_status((void*)pObj, VK_OBJECT_TYPE_CMD_BUFFER, OBJSTATUS_DEPTH_STENCIL_BOUND, OBJSTATUS_DEPTH_STENCIL_BOUND, VK_DBG_MSG_UNKNOWN, OBJTRACK_DEPTH_STENCIL_NOT_BOUND, "Depth-stencil object not bound to this command buffer");')
header_txt.append('}')
header_txt.append('')
- header_txt.append('static void validate_memory_mapping_status(const XGL_GPU_MEMORY* pMemRefs, uint32_t numRefs) {')
+ header_txt.append('static void validate_memory_mapping_status(const VK_GPU_MEMORY* pMemRefs, uint32_t numRefs) {')
header_txt.append(' uint32_t i;')
header_txt.append(' for (i = 0; i < numRefs; i++) {')
header_txt.append(' if (pMemRefs[i]) {')
header_txt.append(' // If mem reference is in a presentable image memory list, skip the check of the GPU_MEMORY list')
- header_txt.append(' if (!validate_status((void *)pMemRefs[i], XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY, OBJSTATUS_NONE, OBJSTATUS_NONE, XGL_DBG_MSG_UNKNOWN, OBJTRACK_NONE, NULL) == XGL_TRUE)')
+ header_txt.append(' if (!validate_status((void *)pMemRefs[i], VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY, OBJSTATUS_NONE, OBJSTATUS_NONE, VK_DBG_MSG_UNKNOWN, OBJTRACK_NONE, NULL) == VK_TRUE)')
header_txt.append(' {')
- header_txt.append(' validate_status((void *)pMemRefs[i], XGL_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED, OBJSTATUS_NONE, XGL_DBG_MSG_ERROR, OBJTRACK_GPU_MEM_MAPPED, "A Mapped Memory Object was referenced in a command buffer");')
+ header_txt.append(' validate_status((void *)pMemRefs[i], VK_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED, OBJSTATUS_NONE, VK_DBG_MSG_ERROR, OBJTRACK_GPU_MEM_MAPPED, "A Mapped Memory Object was referenced in a command buffer");')
header_txt.append(' }')
header_txt.append(' }')
header_txt.append(' }')
header_txt.append('static void validate_mem_ref_count(uint32_t numRefs) {')
header_txt.append(' if (maxMemReferences == 0) {')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "xglQueueSubmit called before calling xglGetGpuInfo");')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_WARNING, XGL_VALIDATION_LEVEL_0, NULL, 0, OBJTRACK_GETGPUINFO_NOT_CALLED, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "vkQueueSubmit called before calling vkGetGpuInfo");')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_WARNING, VK_VALIDATION_LEVEL_0, NULL, 0, OBJTRACK_GETGPUINFO_NOT_CALLED, "OBJTRACK", str);')
header_txt.append(' } else {')
header_txt.append(' if (numRefs > maxMemReferences) {')
header_txt.append(' char str[1024];')
- header_txt.append(' sprintf(str, "xglQueueSubmit Memory reference count (%d) exceeds allowable GPU limit (%d)", numRefs, maxMemReferences);')
- header_txt.append(' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, NULL, 0, OBJTRACK_MEMREFCOUNT_MAX_EXCEEDED, "OBJTRACK", str);')
+ header_txt.append(' sprintf(str, "vkQueueSubmit Memory reference count (%d) exceeds allowable GPU limit (%d)", numRefs, maxMemReferences);')
+ header_txt.append(' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, NULL, 0, OBJTRACK_MEMREFCOUNT_MAX_EXCEEDED, "OBJTRACK", str);')
header_txt.append(' }')
header_txt.append(' }')
header_txt.append('}')
header_txt.append('')
header_txt.append('static void setGpuQueueInfoState(void *pData) {')
- header_txt.append(' maxMemReferences = ((XGL_PHYSICAL_GPU_QUEUE_PROPERTIES *)pData)->maxMemReferences;')
+ header_txt.append(' maxMemReferences = ((VK_PHYSICAL_GPU_QUEUE_PROPERTIES *)pData)->maxMemReferences;')
header_txt.append('}')
return "\n".join(header_txt)
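The C emitted by the header text above maintains one global list plus one list per object type, with per-object use counts; destruction removes the node from both lists, and anything still present at DestroyDevice is reported as a leak. A rough Python model of that bookkeeping (a sketch only; the real layer uses C linked lists, not dicts):

```python
# Simplified model of the object tracker's bookkeeping. The real layer
# walks C linked lists (pGlobalHead / pObjectHead[]); a dict stands in here.
class ObjTracker:
    def __init__(self):
        self.objs = {}  # handle -> {"type": ..., "uses": ...}

    def insert(self, obj, obj_type):
        # Mirrors ll_insert_obj: record the object and its type.
        self.objs[obj] = {"type": obj_type, "uses": 0}

    def use(self, obj, obj_type):
        # Mirrors ll_increment_use_count: unknown objects are inserted first.
        if obj not in self.objs:
            self.insert(obj, obj_type)
        self.objs[obj]["uses"] += 1

    def destroy(self, obj):
        # Mirrors ll_destroy_obj: look up the node and remove it.
        return self.objs.pop(obj, None)

    def leaks(self):
        # Whatever survives to DestroyDevice is reported as a leak.
        return list(self.objs)

t = ObjTracker()
t.insert("fence0", "VK_OBJECT_TYPE_FENCE")
t.use("fence0", "VK_OBJECT_TYPE_FENCE")
t.destroy("fence0")
t.insert("img0", "VK_OBJECT_TYPE_IMAGE")
print(t.leaks())  # ['img0']
```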
if proto.name in [ 'DbgRegisterMsgCallback', 'DbgUnregisterMsgCallback' ]:
# use default version
return None
- obj_type_mapping = {base_t : base_t.replace("XGL_", "XGL_OBJECT_TYPE_") for base_t in xgl.object_type_list}
+ obj_type_mapping = {base_t : base_t.replace("VK_", "VK_OBJECT_TYPE_") for base_t in xgl.object_type_list}
# For the various "super-types" we have to use a function to distinguish the sub-type
- for obj_type in ["XGL_BASE_OBJECT", "XGL_OBJECT", "XGL_DYNAMIC_STATE_OBJECT"]:
+ for obj_type in ["VK_BASE_OBJECT", "VK_OBJECT", "VK_DYNAMIC_STATE_OBJECT"]:
obj_type_mapping[obj_type] = "ll_get_obj_type(object)"
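The mapping built above turns a handle type name into its VK_OBJECT_TYPE_* enumerant by simple string replacement, with the super-types routed through a runtime lookup instead. A standalone illustration (the abbreviated type list is a sample, not the generator's full list):

```python
# Abbreviated sample of the generator's object type list.
object_type_list = ["VK_FENCE", "VK_IMAGE", "VK_OBJECT"]

# Static mapping: "VK_FENCE" -> "VK_OBJECT_TYPE_FENCE", etc.
obj_type_mapping = {t: t.replace("VK_", "VK_OBJECT_TYPE_") for t in object_type_list}

for super_type in ["VK_OBJECT"]:
    # Super-types cannot be resolved statically; defer to the runtime lookup.
    obj_type_mapping[super_type] = "ll_get_obj_type(object)"

print(obj_type_mapping["VK_FENCE"])   # VK_OBJECT_TYPE_FENCE
print(obj_type_mapping["VK_OBJECT"])  # ll_get_obj_type(object)
```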
- decl = proto.c_func(prefix="xgl", attr="XGLAPI")
+ decl = proto.c_func(prefix="vk", attr="VKAPI")
param0_name = proto.params[0].name
p0_type = proto.params[0].ty.strip('*').strip('const ')
create_line = ''
using_line += ' ll_increment_use_count((void*)%s, %s);\n' % (param0_name, obj_type_mapping[p0_type])
using_line += ' loader_platform_thread_unlock_mutex(&objLock);\n'
if 'QueueSubmit' in proto.name:
- using_line += ' set_status((void*)fence, XGL_OBJECT_TYPE_FENCE, OBJSTATUS_FENCE_IS_SUBMITTED);\n'
+ using_line += ' set_status((void*)fence, VK_OBJECT_TYPE_FENCE, OBJSTATUS_FENCE_IS_SUBMITTED);\n'
using_line += ' // TODO: Fix for updated memory reference mechanism\n'
using_line += ' // validate_memory_mapping_status(pMemRefs, memRefCount);\n'
using_line += ' // validate_mem_ref_count(memRefCount);\n'
elif 'GetFenceStatus' in proto.name:
using_line += ' // Warn if submitted_flag is not set\n'
- using_line += ' validate_status((void*)fence, XGL_OBJECT_TYPE_FENCE, OBJSTATUS_FENCE_IS_SUBMITTED, OBJSTATUS_FENCE_IS_SUBMITTED, XGL_DBG_MSG_ERROR, OBJTRACK_INVALID_FENCE, "Status Requested for Unsubmitted Fence");\n'
+ using_line += ' validate_status((void*)fence, VK_OBJECT_TYPE_FENCE, OBJSTATUS_FENCE_IS_SUBMITTED, OBJSTATUS_FENCE_IS_SUBMITTED, VK_DBG_MSG_ERROR, OBJTRACK_INVALID_FENCE, "Status Requested for Unsubmitted Fence");\n'
elif 'EndCommandBuffer' in proto.name:
- using_line += ' reset_status((void*)cmdBuffer, XGL_OBJECT_TYPE_CMD_BUFFER, (OBJSTATUS_VIEWPORT_BOUND |\n'
+ using_line += ' reset_status((void*)cmdBuffer, VK_OBJECT_TYPE_CMD_BUFFER, (OBJSTATUS_VIEWPORT_BOUND |\n'
using_line += ' OBJSTATUS_RASTER_BOUND |\n'
using_line += ' OBJSTATUS_COLOR_BLEND_BOUND |\n'
using_line += ' OBJSTATUS_DEPTH_STENCIL_BOUND));\n'
elif 'CmdDraw' in proto.name:
using_line += ' validate_draw_state_flags((void *)cmdBuffer);\n'
elif 'MapMemory' in proto.name:
- using_line += ' set_status((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);\n'
+ using_line += ' set_status((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);\n'
elif 'UnmapMemory' in proto.name:
- using_line += ' reset_status((void*)mem, XGL_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);\n'
+ using_line += ' reset_status((void*)mem, VK_OBJECT_TYPE_GPU_MEMORY, OBJSTATUS_GPU_MEM_MAPPED);\n'
if 'AllocDescriptor' in proto.name: # Allocates array of DSs
create_line = ' for (uint32_t i = 0; i < *pCount; i++) {\n'
create_line += ' loader_platform_thread_lock_mutex(&objLock);\n'
- create_line += ' ll_insert_obj((void*)pDescriptorSets[i], XGL_OBJECT_TYPE_DESCRIPTOR_SET);\n'
+ create_line += ' ll_insert_obj((void*)pDescriptorSets[i], VK_OBJECT_TYPE_DESCRIPTOR_SET);\n'
create_line += ' loader_platform_thread_unlock_mutex(&objLock);\n'
create_line += ' }\n'
elif 'CreatePresentableImage' in proto.name:
create_line = ' loader_platform_thread_lock_mutex(&objLock);\n'
create_line += ' ll_insert_obj((void*)*%s, %s);\n' % (proto.params[-2].name, obj_type_mapping[proto.params[-2].ty.strip('*').strip('const ')])
- create_line += ' ll_insert_obj((void*)*pMem, XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY);\n'
- # create_line += ' ll_insert_obj((void*)*%s, XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY);\n' % (obj_type_mapping[proto.params[-1].ty.strip('*').strip('const ')])
+ create_line += ' ll_insert_obj((void*)*pMem, VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY);\n'
+ # create_line += ' ll_insert_obj((void*)*%s, VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY);\n' % (obj_type_mapping[proto.params[-1].ty.strip('*').strip('const ')])
create_line += ' loader_platform_thread_unlock_mutex(&objLock);\n'
elif 'Create' in proto.name or 'Alloc' in proto.name:
create_line = ' loader_platform_thread_lock_mutex(&objLock);\n'
using_line = ''
if 'DestroyDevice' in proto.name:
destroy_line += ' // Report any remaining objects in LL\n objNode *pTrav = pGlobalHead;\n while (pTrav) {\n'
- destroy_line += ' if (pTrav->obj.objType == XGL_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY) {\n'
+ destroy_line += ' if (pTrav->obj.objType == VK_OBJECT_TYPE_PRESENTABLE_IMAGE_MEMORY) {\n'
destroy_line += ' objNode *pDel = pTrav;\n'
destroy_line += ' pTrav = pTrav->pNextGlobal;\n'
destroy_line += ' ll_destroy_obj((void*)(pDel->obj.pObj));\n'
destroy_line += ' } else {\n'
destroy_line += ' char str[1024];\n'
- destroy_line += ' sprintf(str, "OBJ ERROR : %s object %p has not been destroyed (was used %lu times).", string_XGL_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, pTrav->obj.numUses);\n'
- destroy_line += ' layerCbMsg(XGL_DBG_MSG_ERROR, XGL_VALIDATION_LEVEL_0, device, 0, OBJTRACK_OBJECT_LEAK, "OBJTRACK", str);\n'
+ destroy_line += ' sprintf(str, "OBJ ERROR : %s object %p has not been destroyed (was used %lu times).", string_VK_OBJECT_TYPE(pTrav->obj.objType), pTrav->obj.pObj, pTrav->obj.numUses);\n'
+ destroy_line += ' layerCbMsg(VK_DBG_MSG_ERROR, VK_VALIDATION_LEVEL_0, device, 0, OBJTRACK_OBJECT_LEAK, "OBJTRACK", str);\n'
destroy_line += ' pTrav = pTrav->pNextGlobal;\n'
destroy_line += ' }\n'
destroy_line += ' }\n'
ret_val = ''
stmt = ''
if proto.ret != "void":
- ret_val = "XGL_RESULT result = "
+ ret_val = "VK_RESULT result = "
stmt = " return result;\n"
if 'WsiX11AssociateConnection' == proto.name:
funcs.append("#if defined(__linux__) || defined(XCB_NVIDIA)")
if proto.name == "EnumerateLayers":
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
' if (gpu != NULL) {\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
' %s'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
' %s'
' } else {\n'
' if (pOutLayerCount == NULL || pOutLayers == NULL || pOutLayers[0] == NULL)\n'
- ' return XGL_ERROR_INVALID_POINTER;\n'
+ ' return VK_ERROR_INVALID_POINTER;\n'
             '        // This layer is compatible with all GPUs\n'
' *pOutLayerCount = 1;\n'
' strncpy((char *) pOutLayers[0], "%s", maxStringSize);\n'
- ' return XGL_SUCCESS;\n'
+ ' return VK_SUCCESS;\n'
' }\n'
'}' % (qual, decl, proto.params[0].name, using_line, self.layer_name, ret_val, c_call, create_line, destroy_line, stmt, self.layer_name))
elif 'GetExtensionSupport' == proto.name:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
funcs.append('%s%s\n'
'{\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
- ' XGL_RESULT result;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_RESULT result;\n'
' /* This entrypoint is NOT going to init its own dispatch table since loader calls this early */\n'
' if (!strncmp(pExtName, "%s", strlen("%s")) ||\n'
' !strncmp(pExtName, "objTrackGetObjectCount", strlen("objTrackGetObjectCount")) ||\n'
' !strncmp(pExtName, "objTrackGetObjects", strlen("objTrackGetObjects")))\n'
' {\n'
- ' result = XGL_SUCCESS;\n'
+ ' result = VK_SUCCESS;\n'
' } else if (nextTable.GetExtensionSupport != NULL)\n'
' {\n'
' %s'
' result = nextTable.%s;\n'
' } else\n'
' {\n'
- ' result = XGL_ERROR_INVALID_EXTENSION;\n'
+ ' result = VK_ERROR_INVALID_EXTENSION;\n'
' }\n'
'%s'
'}' % (qual, decl, proto.params[0].name, self.layer_name, self.layer_name, using_line, c_call, stmt))
- elif proto.params[0].ty != "XGL_PHYSICAL_GPU":
+ elif proto.params[0].ty != "VK_PHYSICAL_GPU":
funcs.append('%s%s\n'
'{\n'
'%s'
'%s'
'}' % (qual, decl, using_line, ret_val, proto.c_call(), create_line, destroy_line, stmt))
else:
- c_call = proto.c_call().replace("(" + proto.params[0].name, "((XGL_PHYSICAL_GPU)gpuw->nextObject", 1)
+ c_call = proto.c_call().replace("(" + proto.params[0].name, "((VK_PHYSICAL_GPU)gpuw->nextObject", 1)
gpu_state = ''
if 'GetGpuInfo' in proto.name:
- gpu_state = ' if (infoType == XGL_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES) {\n'
+ gpu_state = ' if (infoType == VK_INFO_TYPE_PHYSICAL_GPU_QUEUE_PROPERTIES) {\n'
gpu_state += ' if (pData != NULL) {\n'
gpu_state += ' setGpuQueueInfoState(pData);\n'
gpu_state += ' }\n'
gpu_state += ' }\n'
funcs.append('%s%s\n'
'{\n'
- ' XGL_BASE_LAYER_OBJECT* gpuw = (XGL_BASE_LAYER_OBJECT *) %s;\n'
+ ' VK_BASE_LAYER_OBJECT* gpuw = (VK_BASE_LAYER_OBJECT *) %s;\n'
'%s'
' pCurObj = gpuw;\n'
' loader_platform_thread_once(&tabOnce, init%s);\n'
def generate_body(self):
self.layer_name = "ObjectTracker"
body = [self._generate_layer_initialization(True, lockname='obj'),
- self._generate_dispatch_entrypoints("XGL_LAYER_EXPORT"),
+ self._generate_dispatch_entrypoints("VK_LAYER_EXPORT"),
self._generate_extensions(),
self._generate_layer_gpa_function(extensions=['objTrackGetObjectCount', 'objTrackGetObjects'])]
print("Available subcommands are: %s" % " ".join(subcommands))
exit(1)
- hfp = xgl_helper.HeaderFileParser(sys.argv[2])
+ hfp = vk_helper.HeaderFileParser(sys.argv[2])
hfp.parse()
- xgl_helper.enum_val_dict = hfp.get_enum_val_dict()
- xgl_helper.enum_type_dict = hfp.get_enum_type_dict()
- xgl_helper.struct_dict = hfp.get_struct_dict()
- xgl_helper.typedef_fwd_dict = hfp.get_typedef_fwd_dict()
- xgl_helper.typedef_rev_dict = hfp.get_typedef_rev_dict()
- xgl_helper.types_dict = hfp.get_types_dict()
+ vk_helper.enum_val_dict = hfp.get_enum_val_dict()
+ vk_helper.enum_type_dict = hfp.get_enum_type_dict()
+ vk_helper.struct_dict = hfp.get_struct_dict()
+ vk_helper.typedef_fwd_dict = hfp.get_typedef_fwd_dict()
+ vk_helper.typedef_rev_dict = hfp.get_typedef_rev_dict()
+ vk_helper.types_dict = hfp.get_types_dict()
subcmd = subcommands[sys.argv[1]](sys.argv[2:])
subcmd.run()
-"""XGL API description"""
+"""VK API description"""
# Copyright (C) 2014 LunarG, Inc.
#
return "%s %s%s(%s)" % format_vals
def c_pretty_decl(self, name, attr=""):
- """Return a named declaration in C, with xgl.h formatting."""
+ """Return a named declaration in C, with vulkan.h formatting."""
plist = []
for param in self.params:
idx = param.ty.find("[")
return "%s(%s)" % (self.name, self.c_params(need_type=False))
def object_in_params(self):
- """Return the params that are simple XGL objects and are inputs."""
+ """Return the params that are simple VK objects and are inputs."""
return [param for param in self.params if param.ty in objects]
def object_out_params(self):
- """Return the params that are simple XGL objects and are outputs."""
+ """Return the params that are simple VK objects and are outputs."""
return [param for param in self.params
if param.dereferenced_type() in objects]
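object_in_params and object_out_params differ only in whether the raw parameter type or its dereferenced form is looked up in objects, so VK_FENCE counts as an input while VK_FENCE* counts as an output. A standalone sketch; Param, objects, and the sample parameter list below are hypothetical stand-ins for the generator's classes:

```python
# Hypothetical stand-in for the generator's Param class, just enough to
# show the input/output split.
class Param:
    def __init__(self, ty, name):
        self.ty, self.name = ty, name

    def dereferenced_type(self):
        # Strip pointer levels; non-pointer types come back unchanged.
        return self.ty.rstrip("*").strip()

objects = {"VK_FENCE", "VK_DEVICE"}
params = [Param("VK_DEVICE", "device"),
          Param("VK_FENCE*", "pFence"),
          Param("uint32_t", "count")]

ins = [p.name for p in params if p.ty in objects]
outs = [p.name for p in params if p.dereferenced_type() in objects]
# Note: under this simplified dereference, a non-pointer handle like
# VK_DEVICE satisfies both filters.
print(ins, outs)  # ['device'] ['device', 'pFence']
```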
return "\n".join(lines)
-# XGL core API
+# VK core API
core = Extension(
- name="XGL_CORE",
- headers=["xgl.h", "xglDbg.h"],
+ name="VK_CORE",
+ headers=["vulkan.h", "xglDbg.h"],
objects=[
- "XGL_INSTANCE",
- "XGL_PHYSICAL_GPU",
- "XGL_BASE_OBJECT",
- "XGL_DEVICE",
- "XGL_QUEUE",
- "XGL_GPU_MEMORY",
- "XGL_OBJECT",
- "XGL_BUFFER",
- "XGL_BUFFER_VIEW",
- "XGL_IMAGE",
- "XGL_IMAGE_VIEW",
- "XGL_COLOR_ATTACHMENT_VIEW",
- "XGL_DEPTH_STENCIL_VIEW",
- "XGL_SHADER",
- "XGL_PIPELINE",
- "XGL_SAMPLER",
- "XGL_DESCRIPTOR_SET",
- "XGL_DESCRIPTOR_SET_LAYOUT",
- "XGL_DESCRIPTOR_SET_LAYOUT_CHAIN",
- "XGL_DESCRIPTOR_POOL",
- "XGL_DYNAMIC_STATE_OBJECT",
- "XGL_DYNAMIC_VP_STATE_OBJECT",
- "XGL_DYNAMIC_RS_STATE_OBJECT",
- "XGL_DYNAMIC_CB_STATE_OBJECT",
- "XGL_DYNAMIC_DS_STATE_OBJECT",
- "XGL_CMD_BUFFER",
- "XGL_FENCE",
- "XGL_SEMAPHORE",
- "XGL_EVENT",
- "XGL_QUERY_POOL",
- "XGL_FRAMEBUFFER",
- "XGL_RENDER_PASS",
+ "VK_INSTANCE",
+ "VK_PHYSICAL_GPU",
+ "VK_BASE_OBJECT",
+ "VK_DEVICE",
+ "VK_QUEUE",
+ "VK_GPU_MEMORY",
+ "VK_OBJECT",
+ "VK_BUFFER",
+ "VK_BUFFER_VIEW",
+ "VK_IMAGE",
+ "VK_IMAGE_VIEW",
+ "VK_COLOR_ATTACHMENT_VIEW",
+ "VK_DEPTH_STENCIL_VIEW",
+ "VK_SHADER",
+ "VK_PIPELINE",
+ "VK_SAMPLER",
+ "VK_DESCRIPTOR_SET",
+ "VK_DESCRIPTOR_SET_LAYOUT",
+ "VK_DESCRIPTOR_SET_LAYOUT_CHAIN",
+ "VK_DESCRIPTOR_POOL",
+ "VK_DYNAMIC_STATE_OBJECT",
+ "VK_DYNAMIC_VP_STATE_OBJECT",
+ "VK_DYNAMIC_RS_STATE_OBJECT",
+ "VK_DYNAMIC_CB_STATE_OBJECT",
+ "VK_DYNAMIC_DS_STATE_OBJECT",
+ "VK_CMD_BUFFER",
+ "VK_FENCE",
+ "VK_SEMAPHORE",
+ "VK_EVENT",
+ "VK_QUERY_POOL",
+ "VK_FRAMEBUFFER",
+ "VK_RENDER_PASS",
],
protos=[
- Proto("XGL_RESULT", "CreateInstance",
- [Param("const XGL_INSTANCE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_INSTANCE*", "pInstance")]),
+ Proto("VK_RESULT", "CreateInstance",
+ [Param("const VK_INSTANCE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_INSTANCE*", "pInstance")]),
- Proto("XGL_RESULT", "DestroyInstance",
- [Param("XGL_INSTANCE", "instance")]),
+ Proto("VK_RESULT", "DestroyInstance",
+ [Param("VK_INSTANCE", "instance")]),
- Proto("XGL_RESULT", "EnumerateGpus",
- [Param("XGL_INSTANCE", "instance"),
+ Proto("VK_RESULT", "EnumerateGpus",
+ [Param("VK_INSTANCE", "instance"),
Param("uint32_t", "maxGpus"),
Param("uint32_t*", "pGpuCount"),
- Param("XGL_PHYSICAL_GPU*", "pGpus")]),
+ Param("VK_PHYSICAL_GPU*", "pGpus")]),
- Proto("XGL_RESULT", "GetGpuInfo",
- [Param("XGL_PHYSICAL_GPU", "gpu"),
- Param("XGL_PHYSICAL_GPU_INFO_TYPE", "infoType"),
+ Proto("VK_RESULT", "GetGpuInfo",
+ [Param("VK_PHYSICAL_GPU", "gpu"),
+ Param("VK_PHYSICAL_GPU_INFO_TYPE", "infoType"),
Param("size_t*", "pDataSize"),
Param("void*", "pData")]),
Proto("void*", "GetProcAddr",
- [Param("XGL_PHYSICAL_GPU", "gpu"),
+ [Param("VK_PHYSICAL_GPU", "gpu"),
Param("const char*", "pName")]),
- Proto("XGL_RESULT", "CreateDevice",
- [Param("XGL_PHYSICAL_GPU", "gpu"),
- Param("const XGL_DEVICE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DEVICE*", "pDevice")]),
+ Proto("VK_RESULT", "CreateDevice",
+ [Param("VK_PHYSICAL_GPU", "gpu"),
+ Param("const VK_DEVICE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DEVICE*", "pDevice")]),
- Proto("XGL_RESULT", "DestroyDevice",
- [Param("XGL_DEVICE", "device")]),
+ Proto("VK_RESULT", "DestroyDevice",
+ [Param("VK_DEVICE", "device")]),
- Proto("XGL_RESULT", "GetExtensionSupport",
- [Param("XGL_PHYSICAL_GPU", "gpu"),
+ Proto("VK_RESULT", "GetExtensionSupport",
+ [Param("VK_PHYSICAL_GPU", "gpu"),
Param("const char*", "pExtName")]),
- Proto("XGL_RESULT", "EnumerateLayers",
- [Param("XGL_PHYSICAL_GPU", "gpu"),
+ Proto("VK_RESULT", "EnumerateLayers",
+ [Param("VK_PHYSICAL_GPU", "gpu"),
Param("size_t", "maxLayerCount"),
Param("size_t", "maxStringSize"),
Param("size_t*", "pOutLayerCount"),
Param("char* const*", "pOutLayers"),
Param("void*", "pReserved")]),
- Proto("XGL_RESULT", "GetDeviceQueue",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "GetDeviceQueue",
+ [Param("VK_DEVICE", "device"),
Param("uint32_t", "queueNodeIndex"),
Param("uint32_t", "queueIndex"),
- Param("XGL_QUEUE*", "pQueue")]),
+ Param("VK_QUEUE*", "pQueue")]),
- Proto("XGL_RESULT", "QueueSubmit",
- [Param("XGL_QUEUE", "queue"),
+ Proto("VK_RESULT", "QueueSubmit",
+ [Param("VK_QUEUE", "queue"),
Param("uint32_t", "cmdBufferCount"),
- Param("const XGL_CMD_BUFFER*", "pCmdBuffers"),
- Param("XGL_FENCE", "fence")]),
+ Param("const VK_CMD_BUFFER*", "pCmdBuffers"),
+ Param("VK_FENCE", "fence")]),
- Proto("XGL_RESULT", "QueueAddMemReference",
- [Param("XGL_QUEUE", "queue"),
- Param("XGL_GPU_MEMORY", "mem")]),
+ Proto("VK_RESULT", "QueueAddMemReference",
+ [Param("VK_QUEUE", "queue"),
+ Param("VK_GPU_MEMORY", "mem")]),
- Proto("XGL_RESULT", "QueueRemoveMemReference",
- [Param("XGL_QUEUE", "queue"),
- Param("XGL_GPU_MEMORY", "mem")]),
+ Proto("VK_RESULT", "QueueRemoveMemReference",
+ [Param("VK_QUEUE", "queue"),
+ Param("VK_GPU_MEMORY", "mem")]),
- Proto("XGL_RESULT", "QueueWaitIdle",
- [Param("XGL_QUEUE", "queue")]),
+ Proto("VK_RESULT", "QueueWaitIdle",
+ [Param("VK_QUEUE", "queue")]),
- Proto("XGL_RESULT", "DeviceWaitIdle",
- [Param("XGL_DEVICE", "device")]),
+ Proto("VK_RESULT", "DeviceWaitIdle",
+ [Param("VK_DEVICE", "device")]),
- Proto("XGL_RESULT", "AllocMemory",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_MEMORY_ALLOC_INFO*", "pAllocInfo"),
- Param("XGL_GPU_MEMORY*", "pMem")]),
+ Proto("VK_RESULT", "AllocMemory",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_MEMORY_ALLOC_INFO*", "pAllocInfo"),
+ Param("VK_GPU_MEMORY*", "pMem")]),
- Proto("XGL_RESULT", "FreeMemory",
- [Param("XGL_GPU_MEMORY", "mem")]),
+ Proto("VK_RESULT", "FreeMemory",
+ [Param("VK_GPU_MEMORY", "mem")]),
- Proto("XGL_RESULT", "SetMemoryPriority",
- [Param("XGL_GPU_MEMORY", "mem"),
- Param("XGL_MEMORY_PRIORITY", "priority")]),
+ Proto("VK_RESULT", "SetMemoryPriority",
+ [Param("VK_GPU_MEMORY", "mem"),
+ Param("VK_MEMORY_PRIORITY", "priority")]),
- Proto("XGL_RESULT", "MapMemory",
- [Param("XGL_GPU_MEMORY", "mem"),
- Param("XGL_FLAGS", "flags"),
+ Proto("VK_RESULT", "MapMemory",
+ [Param("VK_GPU_MEMORY", "mem"),
+ Param("VK_FLAGS", "flags"),
Param("void**", "ppData")]),
- Proto("XGL_RESULT", "UnmapMemory",
- [Param("XGL_GPU_MEMORY", "mem")]),
+ Proto("VK_RESULT", "UnmapMemory",
+ [Param("VK_GPU_MEMORY", "mem")]),
- Proto("XGL_RESULT", "PinSystemMemory",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "PinSystemMemory",
+ [Param("VK_DEVICE", "device"),
Param("const void*", "pSysMem"),
Param("size_t", "memSize"),
- Param("XGL_GPU_MEMORY*", "pMem")]),
-
- Proto("XGL_RESULT", "GetMultiGpuCompatibility",
- [Param("XGL_PHYSICAL_GPU", "gpu0"),
- Param("XGL_PHYSICAL_GPU", "gpu1"),
- Param("XGL_GPU_COMPATIBILITY_INFO*", "pInfo")]),
-
- Proto("XGL_RESULT", "OpenSharedMemory",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_MEMORY_OPEN_INFO*", "pOpenInfo"),
- Param("XGL_GPU_MEMORY*", "pMem")]),
-
- Proto("XGL_RESULT", "OpenSharedSemaphore",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_SEMAPHORE_OPEN_INFO*", "pOpenInfo"),
- Param("XGL_SEMAPHORE*", "pSemaphore")]),
-
- Proto("XGL_RESULT", "OpenPeerMemory",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_PEER_MEMORY_OPEN_INFO*", "pOpenInfo"),
- Param("XGL_GPU_MEMORY*", "pMem")]),
-
- Proto("XGL_RESULT", "OpenPeerImage",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_PEER_IMAGE_OPEN_INFO*", "pOpenInfo"),
- Param("XGL_IMAGE*", "pImage"),
- Param("XGL_GPU_MEMORY*", "pMem")]),
-
- Proto("XGL_RESULT", "DestroyObject",
- [Param("XGL_OBJECT", "object")]),
-
- Proto("XGL_RESULT", "GetObjectInfo",
- [Param("XGL_BASE_OBJECT", "object"),
- Param("XGL_OBJECT_INFO_TYPE", "infoType"),
+ Param("VK_GPU_MEMORY*", "pMem")]),
+
+ Proto("VK_RESULT", "GetMultiGpuCompatibility",
+ [Param("VK_PHYSICAL_GPU", "gpu0"),
+ Param("VK_PHYSICAL_GPU", "gpu1"),
+ Param("VK_GPU_COMPATIBILITY_INFO*", "pInfo")]),
+
+ Proto("VK_RESULT", "OpenSharedMemory",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_MEMORY_OPEN_INFO*", "pOpenInfo"),
+ Param("VK_GPU_MEMORY*", "pMem")]),
+
+ Proto("VK_RESULT", "OpenSharedSemaphore",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_SEMAPHORE_OPEN_INFO*", "pOpenInfo"),
+ Param("VK_SEMAPHORE*", "pSemaphore")]),
+
+ Proto("VK_RESULT", "OpenPeerMemory",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_PEER_MEMORY_OPEN_INFO*", "pOpenInfo"),
+ Param("VK_GPU_MEMORY*", "pMem")]),
+
+ Proto("VK_RESULT", "OpenPeerImage",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_PEER_IMAGE_OPEN_INFO*", "pOpenInfo"),
+ Param("VK_IMAGE*", "pImage"),
+ Param("VK_GPU_MEMORY*", "pMem")]),
+
+ Proto("VK_RESULT", "DestroyObject",
+ [Param("VK_OBJECT", "object")]),
+
+ Proto("VK_RESULT", "GetObjectInfo",
+ [Param("VK_BASE_OBJECT", "object"),
+ Param("VK_OBJECT_INFO_TYPE", "infoType"),
Param("size_t*", "pDataSize"),
Param("void*", "pData")]),
- Proto("XGL_RESULT", "BindObjectMemory",
- [Param("XGL_OBJECT", "object"),
+ Proto("VK_RESULT", "BindObjectMemory",
+ [Param("VK_OBJECT", "object"),
Param("uint32_t", "allocationIdx"),
- Param("XGL_GPU_MEMORY", "mem"),
- Param("XGL_GPU_SIZE", "offset")]),
+ Param("VK_GPU_MEMORY", "mem"),
+ Param("VK_GPU_SIZE", "offset")]),
- Proto("XGL_RESULT", "BindObjectMemoryRange",
- [Param("XGL_OBJECT", "object"),
+ Proto("VK_RESULT", "BindObjectMemoryRange",
+ [Param("VK_OBJECT", "object"),
Param("uint32_t", "allocationIdx"),
- Param("XGL_GPU_SIZE", "rangeOffset"),
- Param("XGL_GPU_SIZE", "rangeSize"),
- Param("XGL_GPU_MEMORY", "mem"),
- Param("XGL_GPU_SIZE", "memOffset")]),
+ Param("VK_GPU_SIZE", "rangeOffset"),
+ Param("VK_GPU_SIZE", "rangeSize"),
+ Param("VK_GPU_MEMORY", "mem"),
+ Param("VK_GPU_SIZE", "memOffset")]),
- Proto("XGL_RESULT", "BindImageMemoryRange",
- [Param("XGL_IMAGE", "image"),
+ Proto("VK_RESULT", "BindImageMemoryRange",
+ [Param("VK_IMAGE", "image"),
Param("uint32_t", "allocationIdx"),
- Param("const XGL_IMAGE_MEMORY_BIND_INFO*", "bindInfo"),
- Param("XGL_GPU_MEMORY", "mem"),
- Param("XGL_GPU_SIZE", "memOffset")]),
+ Param("const VK_IMAGE_MEMORY_BIND_INFO*", "bindInfo"),
+ Param("VK_GPU_MEMORY", "mem"),
+ Param("VK_GPU_SIZE", "memOffset")]),
- Proto("XGL_RESULT", "CreateFence",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_FENCE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_FENCE*", "pFence")]),
+ Proto("VK_RESULT", "CreateFence",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_FENCE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_FENCE*", "pFence")]),
- Proto("XGL_RESULT", "ResetFences",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "ResetFences",
+ [Param("VK_DEVICE", "device"),
Param("uint32_t", "fenceCount"),
- Param("XGL_FENCE*", "pFences")]),
+ Param("VK_FENCE*", "pFences")]),
- Proto("XGL_RESULT", "GetFenceStatus",
- [Param("XGL_FENCE", "fence")]),
+ Proto("VK_RESULT", "GetFenceStatus",
+ [Param("VK_FENCE", "fence")]),
- Proto("XGL_RESULT", "WaitForFences",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "WaitForFences",
+ [Param("VK_DEVICE", "device"),
Param("uint32_t", "fenceCount"),
- Param("const XGL_FENCE*", "pFences"),
+ Param("const VK_FENCE*", "pFences"),
Param("bool32_t", "waitAll"),
Param("uint64_t", "timeout")]),
- Proto("XGL_RESULT", "CreateSemaphore",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_SEMAPHORE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_SEMAPHORE*", "pSemaphore")]),
+ Proto("VK_RESULT", "CreateSemaphore",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_SEMAPHORE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_SEMAPHORE*", "pSemaphore")]),
- Proto("XGL_RESULT", "QueueSignalSemaphore",
- [Param("XGL_QUEUE", "queue"),
- Param("XGL_SEMAPHORE", "semaphore")]),
+ Proto("VK_RESULT", "QueueSignalSemaphore",
+ [Param("VK_QUEUE", "queue"),
+ Param("VK_SEMAPHORE", "semaphore")]),
- Proto("XGL_RESULT", "QueueWaitSemaphore",
- [Param("XGL_QUEUE", "queue"),
- Param("XGL_SEMAPHORE", "semaphore")]),
+ Proto("VK_RESULT", "QueueWaitSemaphore",
+ [Param("VK_QUEUE", "queue"),
+ Param("VK_SEMAPHORE", "semaphore")]),
- Proto("XGL_RESULT", "CreateEvent",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_EVENT_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_EVENT*", "pEvent")]),
+ Proto("VK_RESULT", "CreateEvent",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_EVENT_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_EVENT*", "pEvent")]),
- Proto("XGL_RESULT", "GetEventStatus",
- [Param("XGL_EVENT", "event")]),
+ Proto("VK_RESULT", "GetEventStatus",
+ [Param("VK_EVENT", "event")]),
- Proto("XGL_RESULT", "SetEvent",
- [Param("XGL_EVENT", "event")]),
+ Proto("VK_RESULT", "SetEvent",
+ [Param("VK_EVENT", "event")]),
- Proto("XGL_RESULT", "ResetEvent",
- [Param("XGL_EVENT", "event")]),
+ Proto("VK_RESULT", "ResetEvent",
+ [Param("VK_EVENT", "event")]),
- Proto("XGL_RESULT", "CreateQueryPool",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_QUERY_POOL_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_QUERY_POOL*", "pQueryPool")]),
+ Proto("VK_RESULT", "CreateQueryPool",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_QUERY_POOL_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_QUERY_POOL*", "pQueryPool")]),
- Proto("XGL_RESULT", "GetQueryPoolResults",
- [Param("XGL_QUERY_POOL", "queryPool"),
+ Proto("VK_RESULT", "GetQueryPoolResults",
+ [Param("VK_QUERY_POOL", "queryPool"),
Param("uint32_t", "startQuery"),
Param("uint32_t", "queryCount"),
Param("size_t*", "pDataSize"),
Param("void*", "pData")]),
- Proto("XGL_RESULT", "GetFormatInfo",
- [Param("XGL_DEVICE", "device"),
- Param("XGL_FORMAT", "format"),
- Param("XGL_FORMAT_INFO_TYPE", "infoType"),
+ Proto("VK_RESULT", "GetFormatInfo",
+ [Param("VK_DEVICE", "device"),
+ Param("VK_FORMAT", "format"),
+ Param("VK_FORMAT_INFO_TYPE", "infoType"),
Param("size_t*", "pDataSize"),
Param("void*", "pData")]),
- Proto("XGL_RESULT", "CreateBuffer",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_BUFFER_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_BUFFER*", "pBuffer")]),
-
- Proto("XGL_RESULT", "CreateBufferView",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_BUFFER_VIEW_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_BUFFER_VIEW*", "pView")]),
-
- Proto("XGL_RESULT", "CreateImage",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_IMAGE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_IMAGE*", "pImage")]),
-
- Proto("XGL_RESULT", "GetImageSubresourceInfo",
- [Param("XGL_IMAGE", "image"),
- Param("const XGL_IMAGE_SUBRESOURCE*", "pSubresource"),
- Param("XGL_SUBRESOURCE_INFO_TYPE", "infoType"),
+ Proto("VK_RESULT", "CreateBuffer",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_BUFFER_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_BUFFER*", "pBuffer")]),
+
+ Proto("VK_RESULT", "CreateBufferView",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_BUFFER_VIEW_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_BUFFER_VIEW*", "pView")]),
+
+ Proto("VK_RESULT", "CreateImage",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_IMAGE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_IMAGE*", "pImage")]),
+
+ Proto("VK_RESULT", "GetImageSubresourceInfo",
+ [Param("VK_IMAGE", "image"),
+ Param("const VK_IMAGE_SUBRESOURCE*", "pSubresource"),
+ Param("VK_SUBRESOURCE_INFO_TYPE", "infoType"),
Param("size_t*", "pDataSize"),
Param("void*", "pData")]),
- Proto("XGL_RESULT", "CreateImageView",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_IMAGE_VIEW_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_IMAGE_VIEW*", "pView")]),
-
- Proto("XGL_RESULT", "CreateColorAttachmentView",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_COLOR_ATTACHMENT_VIEW_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_COLOR_ATTACHMENT_VIEW*", "pView")]),
-
- Proto("XGL_RESULT", "CreateDepthStencilView",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_DEPTH_STENCIL_VIEW_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DEPTH_STENCIL_VIEW*", "pView")]),
-
- Proto("XGL_RESULT", "CreateShader",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_SHADER_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_SHADER*", "pShader")]),
-
- Proto("XGL_RESULT", "CreateGraphicsPipeline",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_GRAPHICS_PIPELINE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_PIPELINE*", "pPipeline")]),
-
- Proto("XGL_RESULT", "CreateGraphicsPipelineDerivative",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_GRAPHICS_PIPELINE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_PIPELINE", "basePipeline"),
- Param("XGL_PIPELINE*", "pPipeline")]),
-
- Proto("XGL_RESULT", "CreateComputePipeline",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_COMPUTE_PIPELINE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_PIPELINE*", "pPipeline")]),
-
- Proto("XGL_RESULT", "StorePipeline",
- [Param("XGL_PIPELINE", "pipeline"),
+ Proto("VK_RESULT", "CreateImageView",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_IMAGE_VIEW_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_IMAGE_VIEW*", "pView")]),
+
+ Proto("VK_RESULT", "CreateColorAttachmentView",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_COLOR_ATTACHMENT_VIEW_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_COLOR_ATTACHMENT_VIEW*", "pView")]),
+
+ Proto("VK_RESULT", "CreateDepthStencilView",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_DEPTH_STENCIL_VIEW_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DEPTH_STENCIL_VIEW*", "pView")]),
+
+ Proto("VK_RESULT", "CreateShader",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_SHADER_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_SHADER*", "pShader")]),
+
+ Proto("VK_RESULT", "CreateGraphicsPipeline",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_GRAPHICS_PIPELINE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_PIPELINE*", "pPipeline")]),
+
+ Proto("VK_RESULT", "CreateGraphicsPipelineDerivative",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_GRAPHICS_PIPELINE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_PIPELINE", "basePipeline"),
+ Param("VK_PIPELINE*", "pPipeline")]),
+
+ Proto("VK_RESULT", "CreateComputePipeline",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_COMPUTE_PIPELINE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_PIPELINE*", "pPipeline")]),
+
+ Proto("VK_RESULT", "StorePipeline",
+ [Param("VK_PIPELINE", "pipeline"),
Param("size_t*", "pDataSize"),
Param("void*", "pData")]),
- Proto("XGL_RESULT", "LoadPipeline",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "LoadPipeline",
+ [Param("VK_DEVICE", "device"),
Param("size_t", "dataSize"),
Param("const void*", "pData"),
- Param("XGL_PIPELINE*", "pPipeline")]),
+ Param("VK_PIPELINE*", "pPipeline")]),
- Proto("XGL_RESULT", "LoadPipelineDerivative",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "LoadPipelineDerivative",
+ [Param("VK_DEVICE", "device"),
Param("size_t", "dataSize"),
Param("const void*", "pData"),
- Param("XGL_PIPELINE", "basePipeline"),
- Param("XGL_PIPELINE*", "pPipeline")]),
+ Param("VK_PIPELINE", "basePipeline"),
+ Param("VK_PIPELINE*", "pPipeline")]),
- Proto("XGL_RESULT", "CreateSampler",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_SAMPLER_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_SAMPLER*", "pSampler")]),
+ Proto("VK_RESULT", "CreateSampler",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_SAMPLER_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_SAMPLER*", "pSampler")]),
- Proto("XGL_RESULT", "CreateDescriptorSetLayout",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_DESCRIPTOR_SET_LAYOUT_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DESCRIPTOR_SET_LAYOUT*", "pSetLayout")]),
+ Proto("VK_RESULT", "CreateDescriptorSetLayout",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_DESCRIPTOR_SET_LAYOUT_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DESCRIPTOR_SET_LAYOUT*", "pSetLayout")]),
- Proto("XGL_RESULT", "CreateDescriptorSetLayoutChain",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "CreateDescriptorSetLayoutChain",
+ [Param("VK_DEVICE", "device"),
Param("uint32_t", "setLayoutArrayCount"),
- Param("const XGL_DESCRIPTOR_SET_LAYOUT*", "pSetLayoutArray"),
- Param("XGL_DESCRIPTOR_SET_LAYOUT_CHAIN*", "pLayoutChain")]),
+ Param("const VK_DESCRIPTOR_SET_LAYOUT*", "pSetLayoutArray"),
+ Param("VK_DESCRIPTOR_SET_LAYOUT_CHAIN*", "pLayoutChain")]),
- Proto("XGL_RESULT", "BeginDescriptorPoolUpdate",
- [Param("XGL_DEVICE", "device"),
- Param("XGL_DESCRIPTOR_UPDATE_MODE", "updateMode")]),
+ Proto("VK_RESULT", "BeginDescriptorPoolUpdate",
+ [Param("VK_DEVICE", "device"),
+ Param("VK_DESCRIPTOR_UPDATE_MODE", "updateMode")]),
- Proto("XGL_RESULT", "EndDescriptorPoolUpdate",
- [Param("XGL_DEVICE", "device"),
- Param("XGL_CMD_BUFFER", "cmd")]),
+ Proto("VK_RESULT", "EndDescriptorPoolUpdate",
+ [Param("VK_DEVICE", "device"),
+ Param("VK_CMD_BUFFER", "cmd")]),
- Proto("XGL_RESULT", "CreateDescriptorPool",
- [Param("XGL_DEVICE", "device"),
- Param("XGL_DESCRIPTOR_POOL_USAGE", "poolUsage"),
+ Proto("VK_RESULT", "CreateDescriptorPool",
+ [Param("VK_DEVICE", "device"),
+ Param("VK_DESCRIPTOR_POOL_USAGE", "poolUsage"),
Param("uint32_t", "maxSets"),
- Param("const XGL_DESCRIPTOR_POOL_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DESCRIPTOR_POOL*", "pDescriptorPool")]),
+ Param("const VK_DESCRIPTOR_POOL_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DESCRIPTOR_POOL*", "pDescriptorPool")]),
- Proto("XGL_RESULT", "ResetDescriptorPool",
- [Param("XGL_DESCRIPTOR_POOL", "descriptorPool")]),
+ Proto("VK_RESULT", "ResetDescriptorPool",
+ [Param("VK_DESCRIPTOR_POOL", "descriptorPool")]),
- Proto("XGL_RESULT", "AllocDescriptorSets",
- [Param("XGL_DESCRIPTOR_POOL", "descriptorPool"),
- Param("XGL_DESCRIPTOR_SET_USAGE", "setUsage"),
+ Proto("VK_RESULT", "AllocDescriptorSets",
+ [Param("VK_DESCRIPTOR_POOL", "descriptorPool"),
+ Param("VK_DESCRIPTOR_SET_USAGE", "setUsage"),
Param("uint32_t", "count"),
- Param("const XGL_DESCRIPTOR_SET_LAYOUT*", "pSetLayouts"),
- Param("XGL_DESCRIPTOR_SET*", "pDescriptorSets"),
+ Param("const VK_DESCRIPTOR_SET_LAYOUT*", "pSetLayouts"),
+ Param("VK_DESCRIPTOR_SET*", "pDescriptorSets"),
Param("uint32_t*", "pCount")]),
Proto("void", "ClearDescriptorSets",
- [Param("XGL_DESCRIPTOR_POOL", "descriptorPool"),
+ [Param("VK_DESCRIPTOR_POOL", "descriptorPool"),
Param("uint32_t", "count"),
- Param("const XGL_DESCRIPTOR_SET*", "pDescriptorSets")]),
+ Param("const VK_DESCRIPTOR_SET*", "pDescriptorSets")]),
Proto("void", "UpdateDescriptors",
- [Param("XGL_DESCRIPTOR_SET", "descriptorSet"),
+ [Param("VK_DESCRIPTOR_SET", "descriptorSet"),
Param("uint32_t", "updateCount"),
Param("const void**", "ppUpdateArray")]),
- Proto("XGL_RESULT", "CreateDynamicViewportState",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_DYNAMIC_VP_STATE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DYNAMIC_VP_STATE_OBJECT*", "pState")]),
+ Proto("VK_RESULT", "CreateDynamicViewportState",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_DYNAMIC_VP_STATE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DYNAMIC_VP_STATE_OBJECT*", "pState")]),
- Proto("XGL_RESULT", "CreateDynamicRasterState",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_DYNAMIC_RS_STATE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DYNAMIC_RS_STATE_OBJECT*", "pState")]),
+ Proto("VK_RESULT", "CreateDynamicRasterState",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_DYNAMIC_RS_STATE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DYNAMIC_RS_STATE_OBJECT*", "pState")]),
- Proto("XGL_RESULT", "CreateDynamicColorBlendState",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_DYNAMIC_CB_STATE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DYNAMIC_CB_STATE_OBJECT*", "pState")]),
+ Proto("VK_RESULT", "CreateDynamicColorBlendState",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_DYNAMIC_CB_STATE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DYNAMIC_CB_STATE_OBJECT*", "pState")]),
- Proto("XGL_RESULT", "CreateDynamicDepthStencilState",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_DYNAMIC_DS_STATE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_DYNAMIC_DS_STATE_OBJECT*", "pState")]),
+ Proto("VK_RESULT", "CreateDynamicDepthStencilState",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_DYNAMIC_DS_STATE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_DYNAMIC_DS_STATE_OBJECT*", "pState")]),
- Proto("XGL_RESULT", "CreateCommandBuffer",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_CMD_BUFFER_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_CMD_BUFFER*", "pCmdBuffer")]),
+ Proto("VK_RESULT", "CreateCommandBuffer",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_CMD_BUFFER_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_CMD_BUFFER*", "pCmdBuffer")]),
- Proto("XGL_RESULT", "BeginCommandBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("const XGL_CMD_BUFFER_BEGIN_INFO*", "pBeginInfo")]),
+ Proto("VK_RESULT", "BeginCommandBuffer",
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("const VK_CMD_BUFFER_BEGIN_INFO*", "pBeginInfo")]),
- Proto("XGL_RESULT", "EndCommandBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer")]),
+ Proto("VK_RESULT", "EndCommandBuffer",
+ [Param("VK_CMD_BUFFER", "cmdBuffer")]),
- Proto("XGL_RESULT", "ResetCommandBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer")]),
+ Proto("VK_RESULT", "ResetCommandBuffer",
+ [Param("VK_CMD_BUFFER", "cmdBuffer")]),
Proto("void", "CmdBindPipeline",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_PIPELINE_BIND_POINT", "pipelineBindPoint"),
- Param("XGL_PIPELINE", "pipeline")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_PIPELINE_BIND_POINT", "pipelineBindPoint"),
+ Param("VK_PIPELINE", "pipeline")]),
Proto("void", "CmdBindDynamicStateObject",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_STATE_BIND_POINT", "stateBindPoint"),
- Param("XGL_DYNAMIC_STATE_OBJECT", "state")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_STATE_BIND_POINT", "stateBindPoint"),
+ Param("VK_DYNAMIC_STATE_OBJECT", "state")]),
Proto("void", "CmdBindDescriptorSets",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_PIPELINE_BIND_POINT", "pipelineBindPoint"),
- Param("XGL_DESCRIPTOR_SET_LAYOUT_CHAIN", "layoutChain"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_PIPELINE_BIND_POINT", "pipelineBindPoint"),
+ Param("VK_DESCRIPTOR_SET_LAYOUT_CHAIN", "layoutChain"),
Param("uint32_t", "layoutChainSlot"),
Param("uint32_t", "count"),
- Param("const XGL_DESCRIPTOR_SET*", "pDescriptorSets"),
+ Param("const VK_DESCRIPTOR_SET*", "pDescriptorSets"),
Param("const uint32_t*", "pUserData")]),
Proto("void", "CmdBindVertexBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "buffer"),
- Param("XGL_GPU_SIZE", "offset"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "buffer"),
+ Param("VK_GPU_SIZE", "offset"),
Param("uint32_t", "binding")]),
Proto("void", "CmdBindIndexBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "buffer"),
- Param("XGL_GPU_SIZE", "offset"),
- Param("XGL_INDEX_TYPE", "indexType")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "buffer"),
+ Param("VK_GPU_SIZE", "offset"),
+ Param("VK_INDEX_TYPE", "indexType")]),
Proto("void", "CmdDraw",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
Param("uint32_t", "firstVertex"),
Param("uint32_t", "vertexCount"),
Param("uint32_t", "firstInstance"),
Param("uint32_t", "instanceCount")]),
Proto("void", "CmdDrawIndexed",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
Param("uint32_t", "firstIndex"),
Param("uint32_t", "indexCount"),
Param("int32_t", "vertexOffset"),
Param("uint32_t", "instanceCount")]),
Proto("void", "CmdDrawIndirect",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "buffer"),
- Param("XGL_GPU_SIZE", "offset"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "buffer"),
+ Param("VK_GPU_SIZE", "offset"),
Param("uint32_t", "count"),
Param("uint32_t", "stride")]),
Proto("void", "CmdDrawIndexedIndirect",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "buffer"),
- Param("XGL_GPU_SIZE", "offset"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "buffer"),
+ Param("VK_GPU_SIZE", "offset"),
Param("uint32_t", "count"),
Param("uint32_t", "stride")]),
Proto("void", "CmdDispatch",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
Param("uint32_t", "x"),
Param("uint32_t", "y"),
Param("uint32_t", "z")]),
Proto("void", "CmdDispatchIndirect",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "buffer"),
- Param("XGL_GPU_SIZE", "offset")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "buffer"),
+ Param("VK_GPU_SIZE", "offset")]),
Proto("void", "CmdCopyBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "srcBuffer"),
- Param("XGL_BUFFER", "destBuffer"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "srcBuffer"),
+ Param("VK_BUFFER", "destBuffer"),
Param("uint32_t", "regionCount"),
- Param("const XGL_BUFFER_COPY*", "pRegions")]),
+ Param("const VK_BUFFER_COPY*", "pRegions")]),
Proto("void", "CmdCopyImage",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "srcImage"),
- Param("XGL_IMAGE_LAYOUT", "srcImageLayout"),
- Param("XGL_IMAGE", "destImage"),
- Param("XGL_IMAGE_LAYOUT", "destImageLayout"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "srcImage"),
+ Param("VK_IMAGE_LAYOUT", "srcImageLayout"),
+ Param("VK_IMAGE", "destImage"),
+ Param("VK_IMAGE_LAYOUT", "destImageLayout"),
Param("uint32_t", "regionCount"),
- Param("const XGL_IMAGE_COPY*", "pRegions")]),
+ Param("const VK_IMAGE_COPY*", "pRegions")]),
Proto("void", "CmdBlitImage",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "srcImage"),
- Param("XGL_IMAGE_LAYOUT", "srcImageLayout"),
- Param("XGL_IMAGE", "destImage"),
- Param("XGL_IMAGE_LAYOUT", "destImageLayout"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "srcImage"),
+ Param("VK_IMAGE_LAYOUT", "srcImageLayout"),
+ Param("VK_IMAGE", "destImage"),
+ Param("VK_IMAGE_LAYOUT", "destImageLayout"),
Param("uint32_t", "regionCount"),
- Param("const XGL_IMAGE_BLIT*", "pRegions")]),
+ Param("const VK_IMAGE_BLIT*", "pRegions")]),
Proto("void", "CmdCopyBufferToImage",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "srcBuffer"),
- Param("XGL_IMAGE", "destImage"),
- Param("XGL_IMAGE_LAYOUT", "destImageLayout"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "srcBuffer"),
+ Param("VK_IMAGE", "destImage"),
+ Param("VK_IMAGE_LAYOUT", "destImageLayout"),
Param("uint32_t", "regionCount"),
- Param("const XGL_BUFFER_IMAGE_COPY*", "pRegions")]),
+ Param("const VK_BUFFER_IMAGE_COPY*", "pRegions")]),
Proto("void", "CmdCopyImageToBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "srcImage"),
- Param("XGL_IMAGE_LAYOUT", "srcImageLayout"),
- Param("XGL_BUFFER", "destBuffer"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "srcImage"),
+ Param("VK_IMAGE_LAYOUT", "srcImageLayout"),
+ Param("VK_BUFFER", "destBuffer"),
Param("uint32_t", "regionCount"),
- Param("const XGL_BUFFER_IMAGE_COPY*", "pRegions")]),
+ Param("const VK_BUFFER_IMAGE_COPY*", "pRegions")]),
Proto("void", "CmdCloneImageData",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "srcImage"),
- Param("XGL_IMAGE_LAYOUT", "srcImageLayout"),
- Param("XGL_IMAGE", "destImage"),
- Param("XGL_IMAGE_LAYOUT", "destImageLayout")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "srcImage"),
+ Param("VK_IMAGE_LAYOUT", "srcImageLayout"),
+ Param("VK_IMAGE", "destImage"),
+ Param("VK_IMAGE_LAYOUT", "destImageLayout")]),
Proto("void", "CmdUpdateBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "destBuffer"),
- Param("XGL_GPU_SIZE", "destOffset"),
- Param("XGL_GPU_SIZE", "dataSize"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "destBuffer"),
+ Param("VK_GPU_SIZE", "destOffset"),
+ Param("VK_GPU_SIZE", "dataSize"),
Param("const uint32_t*", "pData")]),
Proto("void", "CmdFillBuffer",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_BUFFER", "destBuffer"),
- Param("XGL_GPU_SIZE", "destOffset"),
- Param("XGL_GPU_SIZE", "fillSize"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_BUFFER", "destBuffer"),
+ Param("VK_GPU_SIZE", "destOffset"),
+ Param("VK_GPU_SIZE", "fillSize"),
Param("uint32_t", "data")]),
Proto("void", "CmdClearColorImage",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "image"),
- Param("XGL_IMAGE_LAYOUT", "imageLayout"),
- Param("XGL_CLEAR_COLOR", "color"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "image"),
+ Param("VK_IMAGE_LAYOUT", "imageLayout"),
+ Param("VK_CLEAR_COLOR", "color"),
Param("uint32_t", "rangeCount"),
- Param("const XGL_IMAGE_SUBRESOURCE_RANGE*", "pRanges")]),
+ Param("const VK_IMAGE_SUBRESOURCE_RANGE*", "pRanges")]),
Proto("void", "CmdClearDepthStencil",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "image"),
- Param("XGL_IMAGE_LAYOUT", "imageLayout"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "image"),
+ Param("VK_IMAGE_LAYOUT", "imageLayout"),
Param("float", "depth"),
Param("uint32_t", "stencil"),
Param("uint32_t", "rangeCount"),
- Param("const XGL_IMAGE_SUBRESOURCE_RANGE*", "pRanges")]),
+ Param("const VK_IMAGE_SUBRESOURCE_RANGE*", "pRanges")]),
Proto("void", "CmdResolveImage",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_IMAGE", "srcImage"),
- Param("XGL_IMAGE_LAYOUT", "srcImageLayout"),
- Param("XGL_IMAGE", "destImage"),
- Param("XGL_IMAGE_LAYOUT", "destImageLayout"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_IMAGE", "srcImage"),
+ Param("VK_IMAGE_LAYOUT", "srcImageLayout"),
+ Param("VK_IMAGE", "destImage"),
+ Param("VK_IMAGE_LAYOUT", "destImageLayout"),
Param("uint32_t", "rectCount"),
- Param("const XGL_IMAGE_RESOLVE*", "pRects")]),
+ Param("const VK_IMAGE_RESOLVE*", "pRects")]),
Proto("void", "CmdSetEvent",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_EVENT", "event"),
- Param("XGL_PIPE_EVENT", "pipeEvent")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_EVENT", "event"),
+ Param("VK_PIPE_EVENT", "pipeEvent")]),
Proto("void", "CmdResetEvent",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_EVENT", "event"),
- Param("XGL_PIPE_EVENT", "pipeEvent")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_EVENT", "event"),
+ Param("VK_PIPE_EVENT", "pipeEvent")]),
Proto("void", "CmdWaitEvents",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("const XGL_EVENT_WAIT_INFO*", "pWaitInfo")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("const VK_EVENT_WAIT_INFO*", "pWaitInfo")]),
Proto("void", "CmdPipelineBarrier",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("const XGL_PIPELINE_BARRIER*", "pBarrier")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("const VK_PIPELINE_BARRIER*", "pBarrier")]),
Proto("void", "CmdBeginQuery",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_QUERY_POOL", "queryPool"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_QUERY_POOL", "queryPool"),
Param("uint32_t", "slot"),
- Param("XGL_FLAGS", "flags")]),
+ Param("VK_FLAGS", "flags")]),
Proto("void", "CmdEndQuery",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_QUERY_POOL", "queryPool"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_QUERY_POOL", "queryPool"),
Param("uint32_t", "slot")]),
Proto("void", "CmdResetQueryPool",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_QUERY_POOL", "queryPool"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_QUERY_POOL", "queryPool"),
Param("uint32_t", "startQuery"),
Param("uint32_t", "queryCount")]),
Proto("void", "CmdWriteTimestamp",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_TIMESTAMP_TYPE", "timestampType"),
- Param("XGL_BUFFER", "destBuffer"),
- Param("XGL_GPU_SIZE", "destOffset")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_TIMESTAMP_TYPE", "timestampType"),
+ Param("VK_BUFFER", "destBuffer"),
+ Param("VK_GPU_SIZE", "destOffset")]),
Proto("void", "CmdInitAtomicCounters",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_PIPELINE_BIND_POINT", "pipelineBindPoint"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_PIPELINE_BIND_POINT", "pipelineBindPoint"),
Param("uint32_t", "startCounter"),
Param("uint32_t", "counterCount"),
Param("const uint32_t*", "pData")]),
Proto("void", "CmdLoadAtomicCounters",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_PIPELINE_BIND_POINT", "pipelineBindPoint"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_PIPELINE_BIND_POINT", "pipelineBindPoint"),
Param("uint32_t", "startCounter"),
Param("uint32_t", "counterCount"),
- Param("XGL_BUFFER", "srcBuffer"),
- Param("XGL_GPU_SIZE", "srcOffset")]),
+ Param("VK_BUFFER", "srcBuffer"),
+ Param("VK_GPU_SIZE", "srcOffset")]),
Proto("void", "CmdSaveAtomicCounters",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_PIPELINE_BIND_POINT", "pipelineBindPoint"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_PIPELINE_BIND_POINT", "pipelineBindPoint"),
Param("uint32_t", "startCounter"),
Param("uint32_t", "counterCount"),
- Param("XGL_BUFFER", "destBuffer"),
- Param("XGL_GPU_SIZE", "destOffset")]),
+ Param("VK_BUFFER", "destBuffer"),
+ Param("VK_GPU_SIZE", "destOffset")]),
- Proto("XGL_RESULT", "CreateFramebuffer",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_FRAMEBUFFER_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_FRAMEBUFFER*", "pFramebuffer")]),
+ Proto("VK_RESULT", "CreateFramebuffer",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_FRAMEBUFFER_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_FRAMEBUFFER*", "pFramebuffer")]),
- Proto("XGL_RESULT", "CreateRenderPass",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_RENDER_PASS_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_RENDER_PASS*", "pRenderPass")]),
+ Proto("VK_RESULT", "CreateRenderPass",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_RENDER_PASS_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_RENDER_PASS*", "pRenderPass")]),
Proto("void", "CmdBeginRenderPass",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("const XGL_RENDER_PASS_BEGIN*", "pRenderPassBegin")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("const VK_RENDER_PASS_BEGIN*", "pRenderPassBegin")]),
Proto("void", "CmdEndRenderPass",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
- Param("XGL_RENDER_PASS", "renderPass")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
+ Param("VK_RENDER_PASS", "renderPass")]),
- Proto("XGL_RESULT", "DbgSetValidationLevel",
- [Param("XGL_DEVICE", "device"),
- Param("XGL_VALIDATION_LEVEL", "validationLevel")]),
+ Proto("VK_RESULT", "DbgSetValidationLevel",
+ [Param("VK_DEVICE", "device"),
+ Param("VK_VALIDATION_LEVEL", "validationLevel")]),
- Proto("XGL_RESULT", "DbgRegisterMsgCallback",
- [Param("XGL_INSTANCE", "instance"),
- Param("XGL_DBG_MSG_CALLBACK_FUNCTION", "pfnMsgCallback"),
+ Proto("VK_RESULT", "DbgRegisterMsgCallback",
+ [Param("VK_INSTANCE", "instance"),
+ Param("VK_DBG_MSG_CALLBACK_FUNCTION", "pfnMsgCallback"),
Param("void*", "pUserData")]),
- Proto("XGL_RESULT", "DbgUnregisterMsgCallback",
- [Param("XGL_INSTANCE", "instance"),
- Param("XGL_DBG_MSG_CALLBACK_FUNCTION", "pfnMsgCallback")]),
+ Proto("VK_RESULT", "DbgUnregisterMsgCallback",
+ [Param("VK_INSTANCE", "instance"),
+ Param("VK_DBG_MSG_CALLBACK_FUNCTION", "pfnMsgCallback")]),
- Proto("XGL_RESULT", "DbgSetMessageFilter",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "DbgSetMessageFilter",
+ [Param("VK_DEVICE", "device"),
Param("int32_t", "msgCode"),
- Param("XGL_DBG_MSG_FILTER", "filter")]),
+ Param("VK_DBG_MSG_FILTER", "filter")]),
- Proto("XGL_RESULT", "DbgSetObjectTag",
- [Param("XGL_BASE_OBJECT", "object"),
+ Proto("VK_RESULT", "DbgSetObjectTag",
+ [Param("VK_BASE_OBJECT", "object"),
Param("size_t", "tagSize"),
Param("const void*", "pTag")]),
- Proto("XGL_RESULT", "DbgSetGlobalOption",
- [Param("XGL_INSTANCE", "instance"),
- Param("XGL_DBG_GLOBAL_OPTION", "dbgOption"),
+ Proto("VK_RESULT", "DbgSetGlobalOption",
+ [Param("VK_INSTANCE", "instance"),
+ Param("VK_DBG_GLOBAL_OPTION", "dbgOption"),
Param("size_t", "dataSize"),
Param("const void*", "pData")]),
- Proto("XGL_RESULT", "DbgSetDeviceOption",
- [Param("XGL_DEVICE", "device"),
- Param("XGL_DBG_DEVICE_OPTION", "dbgOption"),
+ Proto("VK_RESULT", "DbgSetDeviceOption",
+ [Param("VK_DEVICE", "device"),
+ Param("VK_DBG_DEVICE_OPTION", "dbgOption"),
Param("size_t", "dataSize"),
Param("const void*", "pData")]),
Proto("void", "CmdDbgMarkerBegin",
- [Param("XGL_CMD_BUFFER", "cmdBuffer"),
+ [Param("VK_CMD_BUFFER", "cmdBuffer"),
Param("const char*", "pMarker")]),
Proto("void", "CmdDbgMarkerEnd",
- [Param("XGL_CMD_BUFFER", "cmdBuffer")]),
+ [Param("VK_CMD_BUFFER", "cmdBuffer")]),
],
)
wsi_x11 = Extension(
- name="XGL_WSI_X11",
+ name="VK_WSI_X11",
headers=["xglWsiX11Ext.h"],
objects=[],
protos=[
- Proto("XGL_RESULT", "WsiX11AssociateConnection",
- [Param("XGL_PHYSICAL_GPU", "gpu"),
- Param("const XGL_WSI_X11_CONNECTION_INFO*", "pConnectionInfo")]),
+ Proto("VK_RESULT", "WsiX11AssociateConnection",
+ [Param("VK_PHYSICAL_GPU", "gpu"),
+ Param("const VK_WSI_X11_CONNECTION_INFO*", "pConnectionInfo")]),
- Proto("XGL_RESULT", "WsiX11GetMSC",
- [Param("XGL_DEVICE", "device"),
+ Proto("VK_RESULT", "WsiX11GetMSC",
+ [Param("VK_DEVICE", "device"),
Param("xcb_window_t", "window"),
Param("xcb_randr_crtc_t", "crtc"),
Param("uint64_t*", "pMsc")]),
- Proto("XGL_RESULT", "WsiX11CreatePresentableImage",
- [Param("XGL_DEVICE", "device"),
- Param("const XGL_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO*", "pCreateInfo"),
- Param("XGL_IMAGE*", "pImage"),
- Param("XGL_GPU_MEMORY*", "pMem")]),
+ Proto("VK_RESULT", "WsiX11CreatePresentableImage",
+ [Param("VK_DEVICE", "device"),
+ Param("const VK_WSI_X11_PRESENTABLE_IMAGE_CREATE_INFO*", "pCreateInfo"),
+ Param("VK_IMAGE*", "pImage"),
+ Param("VK_GPU_MEMORY*", "pMem")]),
- Proto("XGL_RESULT", "WsiX11QueuePresent",
- [Param("XGL_QUEUE", "queue"),
- Param("const XGL_WSI_X11_PRESENT_INFO*", "pPresentInfo"),
- Param("XGL_FENCE", "fence")]),
+ Proto("VK_RESULT", "WsiX11QueuePresent",
+ [Param("VK_QUEUE", "queue"),
+ Param("const VK_WSI_X11_PRESENT_INFO*", "pPresentInfo"),
+ Param("VK_FENCE", "fence")]),
],
)
extensions = [core, wsi_x11]
object_root_list = [
- "XGL_INSTANCE",
- "XGL_PHYSICAL_GPU",
- "XGL_BASE_OBJECT"
+ "VK_INSTANCE",
+ "VK_PHYSICAL_GPU",
+ "VK_BASE_OBJECT"
]
object_base_list = [
- "XGL_DEVICE",
- "XGL_QUEUE",
- "XGL_GPU_MEMORY",
- "XGL_OBJECT"
+ "VK_DEVICE",
+ "VK_QUEUE",
+ "VK_GPU_MEMORY",
+ "VK_OBJECT"
]
object_list = [
- "XGL_BUFFER",
- "XGL_BUFFER_VIEW",
- "XGL_IMAGE",
- "XGL_IMAGE_VIEW",
- "XGL_COLOR_ATTACHMENT_VIEW",
- "XGL_DEPTH_STENCIL_VIEW",
- "XGL_SHADER",
- "XGL_PIPELINE",
- "XGL_SAMPLER",
- "XGL_DESCRIPTOR_SET",
- "XGL_DESCRIPTOR_SET_LAYOUT",
- "XGL_DESCRIPTOR_SET_LAYOUT_CHAIN",
- "XGL_DESCRIPTOR_POOL",
- "XGL_DYNAMIC_STATE_OBJECT",
- "XGL_CMD_BUFFER",
- "XGL_FENCE",
- "XGL_SEMAPHORE",
- "XGL_EVENT",
- "XGL_QUERY_POOL",
- "XGL_FRAMEBUFFER",
- "XGL_RENDER_PASS"
+ "VK_BUFFER",
+ "VK_BUFFER_VIEW",
+ "VK_IMAGE",
+ "VK_IMAGE_VIEW",
+ "VK_COLOR_ATTACHMENT_VIEW",
+ "VK_DEPTH_STENCIL_VIEW",
+ "VK_SHADER",
+ "VK_PIPELINE",
+ "VK_SAMPLER",
+ "VK_DESCRIPTOR_SET",
+ "VK_DESCRIPTOR_SET_LAYOUT",
+ "VK_DESCRIPTOR_SET_LAYOUT_CHAIN",
+ "VK_DESCRIPTOR_POOL",
+ "VK_DYNAMIC_STATE_OBJECT",
+ "VK_CMD_BUFFER",
+ "VK_FENCE",
+ "VK_SEMAPHORE",
+ "VK_EVENT",
+ "VK_QUERY_POOL",
+ "VK_FRAMEBUFFER",
+ "VK_RENDER_PASS"
]
object_dynamic_state_list = [
- "XGL_DYNAMIC_VP_STATE_OBJECT",
- "XGL_DYNAMIC_RS_STATE_OBJECT",
- "XGL_DYNAMIC_CB_STATE_OBJECT",
- "XGL_DYNAMIC_DS_STATE_OBJECT"
+ "VK_DYNAMIC_VP_STATE_OBJECT",
+ "VK_DYNAMIC_RS_STATE_OBJECT",
+ "VK_DYNAMIC_CB_STATE_OBJECT",
+ "VK_DYNAMIC_DS_STATE_OBJECT"
]
object_type_list = object_root_list + object_base_list + object_list + object_dynamic_state_list
-object_parent_list = ["XGL_BASE_OBJECT", "XGL_OBJECT", "XGL_DYNAMIC_STATE_OBJECT"]
+object_parent_list = ["VK_BASE_OBJECT", "VK_OBJECT", "VK_DYNAMIC_STATE_OBJECT"]
headers = []
objects = []
with open(filename, "r") as fp:
for line in fp:
line = line.strip()
- if line.startswith("XGL_DEFINE"):
+ if line.startswith("VK_DEFINE"):
begin = line.find("(") + 1
end = line.find(",")
# extract the object type
# parse proto_lines to protos
protos = []
for line in proto_lines:
- first, rest = line.split(" (XGLAPI *xgl")
+ first, rest = line.split(" (VKAPI *vk")
second, third = rest.split("Type)(")
# get the return type, no space before "*"
protos.append(Proto(proto_ret, proto_name, params))
# make them an extension and print
- ext = Extension("XGL_CORE",
- headers=["xgl.h", "xglDbg.h"],
+ ext = Extension("VK_CORE",
+ headers=["vulkan.h", "xglDbg.h"],
objects=object_lines,
protos=protos)
print("core =", str(ext))
print("")
- print("typedef struct _XGL_LAYER_DISPATCH_TABLE")
+ print("typedef struct _VK_LAYER_DISPATCH_TABLE")
print("{")
for proto in ext.protos:
- print(" xgl%sType %s;" % (proto.name, proto.name))
- print("} XGL_LAYER_DISPATCH_TABLE;")
+ print(" vk%sType %s;" % (proto.name, proto.name))
+ print("} VK_LAYER_DISPATCH_TABLE;")
if __name__ == "__main__":
- parse_xgl_h("include/xgl.h")
+ parse_xgl_h("include/vulkan.h")
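The dispatch-table emission in the hunk above can be sketched as a self-contained Python snippet. The `Proto` namedtuple and `emit_dispatch_table` helper here are stand-ins for the script's own `Proto` class and inline print loop, not part of the patch:

```python
# Minimal sketch of how the generator prints the layer dispatch table.
# "Proto" is a hypothetical stand-in; the real script builds protos by
# parsing vulkan.h for "(VKAPI *vk...Type)(" typedef lines.
from collections import namedtuple

Proto = namedtuple("Proto", ["ret", "name"])

def emit_dispatch_table(protos):
    # Each entry pairs the generated typedef name (vk<Name>Type)
    # with a struct member named after the command itself.
    lines = ["typedef struct _VK_LAYER_DISPATCH_TABLE", "{"]
    lines += ["    vk%sType %s;" % (p.name, p.name) for p in protos]
    lines.append("} VK_LAYER_DISPATCH_TABLE;")
    return "\n".join(lines)

print(emit_dispatch_table([Proto("void", "CmdDbgMarkerBegin"),
                           Proto("void", "CmdDbgMarkerEnd")]))
```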